What exactly can the BP neural network algorithm predict from samples?
Notes on Data Normalization for Neural Networks (MATLAB)

Because the collected data have inconsistent units, the data must be normalized, for example to [-1, 1]. Several normalization methods are offered for reference: (author James)

1. Linear function transformation, with the following expression:

y = (x - min) / (max - min)

Note: x and y are the values before and after conversion, and max and min are the maximum and minimum values of the sample, respectively.
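The article's own snippets are MATLAB; as an illustration only, the same linear (min-max) transformation can be sketched in Python with NumPy (the function name here is illustrative, not from the article):

```python
import numpy as np

def min_max_normalize(x):
    """Linear transformation y = (x - min) / (max - min).

    Maps the sample values into [0, 1]: the smallest value becomes 0,
    the largest becomes 1, everything else scales linearly in between.
    """
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

x = np.array([10.0, 20.0, 30.0, 50.0])
y = min_max_normalize(x)
print(y)  # [0.   0.25 0.5  1.  ]
```

Note that the divisor is computed from the training sample; new data outside [min, max] would fall outside [0, 1].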

2. Logarithmic function transformation, with the following expression:

y = log10(x)

Note: a base-10 logarithmic transformation (the inputs must be positive).

3. Arctangent function transformation, with the following expression:

y = atan(x) * 2/PI
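As an illustration (again in Python rather than the article's MATLAB, with illustrative function names), methods 2 and 3 can be sketched as follows. The log transform compresses large dynamic ranges; the arctangent transform maps any real input into the open interval (-1, 1):

```python
import math

def log10_transform(x):
    # Base-10 logarithmic transformation; x must be positive.
    return math.log10(x)

def atan_transform(x):
    # Arctangent transformation: atan(x) lies in (-pi/2, pi/2),
    # so scaling by 2/pi maps any real x into (-1, 1).
    return math.atan(x) * 2 / math.pi

print(log10_transform(1000))  # 3.0
print(atan_transform(1.0))    # 0.5
```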

The main purpose of normalization is to speed up the convergence of network training; in some cases it can be omitted.

Concretely, normalization unifies the statistical distribution of the samples. Normalizing to [0, 1] gives values that can be read as a statistical probability distribution, while normalizing to [-1, 1] gives a symmetric coordinate distribution. Whether modeling or computing, the basic units of measurement must first agree. A neural network trains on and predicts from the statistics of the samples, so mapping all of them onto a common distribution, such as [0, 1], puts every input on the same footing.

When the input signals of all samples are positive, the weights connecting to the first hidden layer's neurons can only all increase or all decrease together, which slows learning. To avoid this and speed up training, normalize the input signals so that, over all samples, the mean of each input is close to zero or small compared with its standard deviation.

Output normalization is needed because the sigmoid function's value lies between 0 and 1, and so does the output of the network's last node, so the sample outputs must often be normalized as well. Since the sigmoid only approaches 0 and 1 asymptotically, targets such as [0.9 0.1 0.1] often work better than [1 0 0] for classification problems.

However, normalization is not always appropriate; depending on the distribution of the output values, other statistical transformations such as standardization may sometimes work better.

Normalizing with the premnmx function:

The syntax of the premnmx function is: [pn, minp, maxp, tn, mint, maxt] = premnmx(p, t).

Here p and t are the original input and output data, minp and maxp are the minimum and maximum values of p, and mint and maxt are the minimum and maximum values of t, respectively.

The premnmx function normalizes the network's input or output data; the normalized data fall in the interval [-1, 1].

If normalized sample data were used to train the network, then any new data fed to the network later must receive the same preprocessing as the sample data; this is what tramnmx is for.

The tramnmx function is described below:

[Pn] = tramnmx(P, minp, maxp)

Here P and Pn are the input data before and after conversion, and minp and maxp are the minimum and maximum values previously found by the premnmx function.
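The premnmx/tramnmx pair comes from the older MATLAB Neural Network Toolbox. Assuming their documented behavior (a row-wise linear map onto [-1, 1]), a rough Python/NumPy equivalent can be sketched like this; the key point is that tramnmx reuses the min/max recorded from the training data rather than recomputing them:

```python
import numpy as np

def premnmx_like(p):
    """Map each row of p linearly onto [-1, 1] and return the
    per-row minima and maxima for later reuse (mimics premnmx)."""
    p = np.asarray(p, dtype=float)
    minp = p.min(axis=1, keepdims=True)
    maxp = p.max(axis=1, keepdims=True)
    pn = 2 * (p - minp) / (maxp - minp) - 1
    return pn, minp, maxp

def tramnmx_like(p_new, minp, maxp):
    """Apply the SAME training-set min/max to new data (mimics tramnmx)."""
    p_new = np.asarray(p_new, dtype=float)
    return 2 * (p_new - minp) / (maxp - minp) - 1

p = np.array([[0.0, 5.0, 10.0]])
pn, minp, maxp = premnmx_like(p)
print(pn)                               # [[-1.  0.  1.]]
p_new = np.array([[2.5]])
print(tramnmx_like(p_new, minp, maxp))  # [[-0.5]]
```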

(Terry in 2008)

There are three ways to normalize in MATLAB:

1. premnmx, postmnmx, tramnmx
2. prestd, poststd, trastd
3. Write your own normalization

Which method is appropriate depends on your specific problem.

(by happy)

Scaling each row by its maximum absolute value (maps the row into [-1, 1]):

pm = max(abs(p(i,:))); p(i,:) = p(i,:) / pm;

Or row-wise min-max scaling (maps each row into [0, 1]), here for 27 input rows:

for i = 1:27
    p(i,:) = (p(i,:) - min(p(i,:))) / (max(p(i,:)) - min(p(i,:)));
end

To map into [0.1, 0.9] instead, use 0.1 + (x - min)/(max - min) * (0.9 - 0.1), where max and min are the maximum and minimum values of the sample, respectively.
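The [0.1, 0.9] variant above can be sketched in Python/NumPy as follows (function name and default bounds are illustrative); keeping targets away from the sigmoid's saturated endpoints 0 and 1 is the motivation given earlier in the article:

```python
import numpy as np

def scale_to_range(x, lo=0.1, hi=0.9):
    """y = lo + (x - min)/(max - min) * (hi - lo).

    Maps the sample linearly into [lo, hi]; with the defaults, the
    minimum becomes 0.1 and the maximum becomes 0.9.
    """
    x = np.asarray(x, dtype=float)
    return lo + (x - x.min()) / (x.max() - x.min()) * (hi - lo)

x = np.array([5.0, 10.0, 15.0])
y = scale_to_range(x)
print(y)  # [0.1 0.5 0.9]
```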