Formula analysis: the density of the normal distribution is f(x) = (1 / (σ√(2π))) · exp(−(x − μ)² / (2σ²)), where μ is the mean and σ is the standard deviation. μ determines the location of the distribution: the closer x is to μ, the higher the probability density, and the density falls off as x moves away from μ. σ describes the dispersion of the distribution: the larger σ is, the more spread out the data are and the flatter the curve; the smaller σ is, the more concentrated the data are and the steeper the curve.
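The effect of σ on the shape of the curve can be checked numerically. The following is a minimal sketch in Python; the function name normal_pdf is illustrative, and the formula is the standard normal density stated above.

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of the normal distribution N(mu, sigma^2)."""
    coeff = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return coeff * math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

# The peak sits at x = mu; a larger sigma flattens the curve,
# while a smaller sigma makes it steeper and more concentrated.
for sigma in (0.5, 1.0, 2.0):
    print(f"sigma={sigma}: peak density = {normal_pdf(0.0, 0.0, sigma):.4f}")
```

Evaluating the density at x = μ for several values of σ makes the "flatter versus steeper" statement concrete: the peak height is 1/(σ√(2π)), so doubling σ halves the peak.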
Historical development of the normal distribution
The concept of the normal distribution was first put forward by the French mathematician Abraham de Moivre in 1733, and it was first applied to astronomical research by the German mathematician Gauss, which is why it is also called the Gaussian distribution. Gauss's work greatly influenced later generations, who named the distribution after him; it is largely because of this work that credit for inventing the method of least squares was attributed to Gauss.
The German 10 DM banknote bearing Gauss's portrait also shows the normal density curve. This conveys the view that, among all of Gauss's scientific contributions, this one had the greatest influence on human civilization. At the time of Gauss's discovery, its advantage could perhaps only be judged from the simplification it brought to the theory; its full influence could not yet be seen.
The normal small-sample theory was not fully developed until the 20th century. Laplace quickly understood Gauss's work and immediately related it to his central limit theorem. To this end he added a short supplement to an article about to be published (it appeared in 1810), pointing out that if an error can be regarded as the superposition of many small quantities, then by his central limit theorem the error should follow a Gaussian distribution.
This was the first mention in history of the so-called "elementary error theory": an error is the superposition of a large number of elementary errors arising from various causes. Later, in 1837, G. Hagen formally put forward this theory in a paper.
In fact, his formulation had considerable limitations: Hagen imagined the error as the sum of a large number of independent and identically distributed "elementary errors", each taking one of two values with probability 1/2 each. By De Moivre's central limit theorem it then follows immediately that the error (approximately) obeys the normal distribution.
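Hagen's two-valued elementary-error model is easy to simulate. The sketch below, a simplified illustration rather than Hagen's original construction, sums n independent errors of ±eps (probability 1/2 each); the function name hagen_error and the parameter values are illustrative assumptions.

```python
import random
import statistics

def hagen_error(n=1000, eps=0.01):
    """One observational error modeled as the sum of n independent
    elementary errors, each +eps or -eps with probability 1/2."""
    return sum(random.choice((-eps, eps)) for _ in range(n))

random.seed(42)  # fixed seed for reproducibility
samples = [hagen_error() for _ in range(5000)]

# By the De Moivre-Laplace theorem, the sum is approximately normal
# with mean 0 and standard deviation eps * sqrt(n) (about 0.316 here).
print("mean :", statistics.mean(samples))
print("stdev:", statistics.stdev(samples))
```

The empirical mean of the simulated errors stays near 0 and their standard deviation near eps·√n, and a histogram of the samples would show the familiar bell shape, which is exactly the approximate normality the theorem predicts.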
The point made by Laplace is of great significance, because it gave the normal error theory a more natural, reasonable, and convincing explanation. Gauss's own argument smacks of circularity: from the premise that the arithmetic mean is optimal, he deduced that the error must follow the normal distribution.
Conversely, the optimality of the arithmetic mean and of least-squares estimation can be deduced from the normality of the error, so one of the two (the optimality of the arithmetic mean, or the normality of the error) must be taken as the starting point. But the optimality of the arithmetic mean cannot simply be assumed; taking it as a theoretical presupposition leaves a gap in the argument. Laplace's theory is of great significance because it repairs this broken link and makes the whole a harmonious theory.