The history of probability and statistics
Probability theory is the branch of mathematics that studies the laws governing random phenomena. It originated in the middle of the 17th century. At that time, in fields such as the theory of errors, demography, and life insurance, large quantities of random data needed to be organized and studied, and out of this grew a mathematics devoted to the regularities exhibited by large numbers of random phenomena. The problems that first drew mathematicians to probability, however, came from gamblers. One such question, discussed in the correspondence between the French mathematicians Pascal and Fermat, runs as follows: two gamblers agree to play a series of games, and whoever first wins S games takes the stakes; if play is interrupted when gambler A has won a games and gambler B has won b games, how should the stakes be divided fairly? (This is the celebrated problem of points.)

Another founder who made probability theory a branch of mathematics was the Swiss mathematician Jacob Bernoulli (1654-1705). His main contribution was to establish the first limit theorem of probability theory, now called Bernoulli's law of large numbers: in repeated independent trials, the frequency of an event tends to stabilize. A rigorous statement and proof appeared only after his death, in 1713, in his posthumous work Ars Conjectandi (The Art of Conjecturing).

In 1730, the French-born mathematician De Moivre published his Miscellanea Analytica, which contains the famous De Moivre-Laplace theorem, the prototype of the second basic limit theorem of probability theory. Laplace then gave a clear definition of probability in his Théorie analytique des probabilités, published in 1812. Together with several other mathematicians, he also developed the theory of the normal distribution and the method of least squares. Another representative figure in the history of probability is the French mathematician Poisson, who extended Bernoulli's law of large numbers to a more general form and studied a new distribution, now called the Poisson distribution. After them, the focus of probability theory became the generalization and perfection of Bernoulli's law of large numbers and the central limit theorem.

By 1901 the central limit theorem had finally been proved rigorously, and later mathematicians used this theorem to explain scientifically, for the first time, why so many random variables encountered in practice approximately obey the normal distribution. In the 1930s people began to study stochastic processes, and the famous theory of Markov processes was founded in 1931. The Soviet mathematician Andrey Kolmogorov also made great contributions to the development of probability theory. In modern times the subject has divided into theoretical and applied branches, and probability theory has been applied in many different fields, giving rise to new disciplines; modern probability theory has thus become a very large branch of mathematics.

Probability theory arose historically from the study of how likely events are to occur, but its earliest origins are tied to gambling.

In the 16th century, the Italian scholar Cardano began to study simple gambling problems such as dice games. In those early days, the concepts and simple methods of probability and statistics were used mainly in gambling and in demographic models.

As human social practice developed, people needed to understand the regularities hidden within various uncertain phenomena and to study the possibility of different outcomes by mathematical methods; from this, probability theory arose and gradually developed into a rigorous discipline. The methods of probability and statistics have since penetrated one field after another and are now widely applied in natural science, economics, medicine, finance and insurance, and even the humanities.

Development: With the progress of science in the 18th and 19th centuries, people noticed similarities between certain biological, physical, and social phenomena and games of chance, and so probability theory, born of games of chance, came to be applied in these fields, which in turn greatly promoted its own development. The founder who made probability theory a branch of mathematics was the Swiss mathematician Jacob Bernoulli, who established its first limit theorem, Bernoulli's law of large numbers, showing that the frequency of an event stabilizes around its probability.

Then de Moivre and Laplace derived the original form of the second basic limit theorem (the central limit theorem). Systematically summarizing the work of his predecessors, Laplace wrote the Théorie analytique des probabilités, gave a clear classical definition of probability, and introduced more powerful analytical tools into probability theory, pushing it to a new stage of development.

At the end of the 19th century, the Russian mathematicians Chebyshev, Markov, Lyapunov, and others established general forms of the law of large numbers and the central limit theorem by analytic methods, and explained scientifically why many random variables encountered in practice approximately obey the normal distribution. At the beginning of the 20th century, influenced by physics, people began to study stochastic processes.

In this area, Kolmogorov, Wiener, Markov, Khinchin, Lévy, and Feller all made outstanding contributions. Probability theory is a branch of mathematics that studies the quantitative laws of random phenomena.

Random phenomena stand in contrast to deterministic phenomena. A phenomenon whose result must occur under given conditions is called deterministic.

For example, at standard atmospheric pressure, pure water heated to 100°C will inevitably boil. A random phenomenon, by contrast, is one in which, under the same basic conditions, the outcome of each trial or observation cannot be determined in advance; it exhibits contingency.

For example, a tossed coin may come up heads or tails. The realization and observation of random phenomena are called random experiments.

Every possible outcome of a random experiment is called a basic event, and a basic event or a group of basic events is collectively called a random event, or simply an event. Typical random experiments involve dice, coins, playing cards, and roulette wheels.

The probability of an event is a measure of how likely the event is to occur. Although the occurrence of an event in a random trial is accidental, random trials that can be repeated many times under the same conditions often exhibit clear quantitative regularities.

Reference:

Baidu Encyclopedia: Probability theory.

Historical story of probability:

The first person to calculate probabilities systematically was Cardano, in the 16th century. This is recorded in his book on games of chance (the Liber de ludo aleae), whose probability content was translated from Latin by Gould.

Cardano's mathematical writings contain many suggestions for gamblers, all collected in that book. However, the first systematic study of probability appeared in a series of letters between Pascal and Fermat.

The correspondence was begun by Pascal, who wanted to put to Fermat some questions raised by the Chevalier de Méré. The Chevalier de Méré was a well-known writer, a prominent figure at the court of Louis XIV, and an avid gambler. Two problems were central: a problem about throwing dice, and the problem of dividing the stakes in an interrupted game.

Probability is a numerical measure of the possibility of a chance event. If an experiment is repeated x times and the event in question occurs y times, a ratio can be formed with x as the denominator and y as the numerator.

Over many experiments this ratio P stays relatively stable near a certain value, and that value is called the probability of the event. A probability of a chance event determined in this way, through long-term observation or a large number of repeated experiments, is called a statistical or empirical probability.
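
As a small illustration of this empirical definition, here is a minimal Python sketch (not part of the original text; the fair-coin event and the 0.5 used inside the simulation are assumptions chosen for the example):

    import random

    def empirical_probability(trials: int) -> float:
        # Repeat the experiment x = trials times, count the y occurrences
        # of the event, and return the ratio P = y / x.
        occurrences = sum(1 for _ in range(trials) if random.random() < 0.5)
        return occurrences / trials

    # The ratio wanders for small x but settles near 0.5 as x grows.
    for x in (100, 10_000, 1_000_000):
        print(x, empirical_probability(x))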

Further information:

As the problems people encountered grew more and more complicated, the assumption of equally likely outcomes gradually exposed its weakness; in particular, for one and the same event, different probabilities could be computed by choosing the "equally likely" cases differently, which gave rise to various paradoxes.

On the other hand, as experience accumulated, people gradually realized that when an experiment is repeated a great many times, the frequency of an event always oscillates around a fixed number as the number of trials increases, showing a certain stability.

R. von Mises defined this number as the probability of the event; this is the frequency definition of probability. Theoretically, however, the frequency definition is not rigorous enough.

Baidu Encyclopedia: Probability.


Seeking the development history of probability theory from the 19th to the 20th century: Probability theory is a branch of mathematics that studies the quantitative laws of random phenomena.

A random phenomenon is an objective phenomenon whose result, when observed, cannot be determined in advance and can only be one of many possible outcomes. Nature and human society are full of random phenomena.

For example, a tossed coin may come up heads or tails; when the length of an object is measured, the results may differ from one measurement to the next because the instrument and the observation are affected by the environment; the lifetimes of light bulbs produced under the same process conditions are uneven; and so on. These are all random phenomena.

The realization and observation of random phenomena are called random experiments, and every possible outcome of a random experiment is called a basic event. A basic event or a group of basic events is also called a random event, or simply an event. The probability of an event is a measure of how likely the event is to occur.

Although the occurrence of an event in a random trial is accidental, random trials that can be repeated many times under the same conditions often show clear quantitative regularities. Through long practice, people gradually recognized some of these laws and applied them.

For example, if a fair coin is tossed repeatedly, the frequency of heads (the ratio of the number of heads to the number of tosses) gradually stabilizes at 1/2 as the number of tosses increases. Again, if the length of an object is measured many times, the average of the measurements stabilizes at a constant as the number of measurements grows; most measured values fall near this constant, and values farther from it occur less often, so the distribution is "large in the middle and small at the two ends" and roughly symmetric, that is, approximately normal.
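
This measurement example can be imitated with a short simulation (a sketch, not from the original text; the true length of 10.0 and the uniform noise model are assumptions chosen for illustration):

    import random
    import statistics

    TRUE_LENGTH = 10.0  # hypothetical true value of the measured object

    def measure() -> float:
        # One measurement = true value plus the sum of many small independent
        # disturbances; sums of this kind are what the central limit theorem
        # says are approximately normal.
        noise = sum(random.uniform(-0.01, 0.01) for _ in range(100))
        return TRUE_LENGTH + noise

    values = [measure() for _ in range(10_000)]
    print(statistics.mean(values))   # settles near 10.0
    print(statistics.stdev(values))  # small; most values fall near the mean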

The law of large numbers and the central limit theorem describe and justify these regularities. In practice, people often need to study how a particular random phenomenon evolves as time advances; what describes this evolution is the stochastic process of probability theory.

For example, the number of calls received by a telephone exchange from a given moment up to each later moment is a stochastic process. Likewise, the irregular motion of tiny particles in a liquid caused by random collisions with the surrounding molecules (Brownian motion) is a stochastic process.

Studying the statistical characteristics of stochastic processes and computing the probabilities of events connected with a process, especially problems involving the sample paths of the process (that is, its individual realizations), is a main topic of modern probability theory. In short, probability theory is closely tied to practice and is widely applied in natural science, the technical sciences, the social sciences, military affairs, and industrial and agricultural production.

Probability theory is also the theoretical basis of mathematical statistics. A brief history of its development: probability theory has a long history, and its origin is connected with problems of games of chance.

In the 16th century, some Italian scholars began to study simple gambling problems such as rolling dice, for example comparing the chances that the total shown by two dice is 9 or 10. In the mid-17th century, the French mathematicians B. Pascal and P. de Fermat and the Dutch mathematician C. Huygens studied more complicated gambling problems on the basis of permutations and combinations (see combinatorial mathematics); they solved the problem of the "rational division of stakes" (the "problem of points"; see probability) and the problem of the gambler's ruin.

Their method was not to compute a gambler's probability of winning directly, but to compute his expected winnings, which led to the concept of mathematical expectation (explicitly introduced by Huygens). The true founder of probability theory as a branch of mathematics was the Swiss mathematician Jacob Bernoulli, who established its first limit theorem, Bernoulli's law of large numbers. The theorem asserts that if event A has probability p(A) = p with 0 < p < 1, then as the number of independent trials increases, the frequency of A converges in probability to p; probability is accordingly to be understood as a measure of the likelihood of an event's occurrence, that is, as a probability measure in the axiomatic sense (see below).
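
In modern notation (not used in the original answer), Bernoulli's theorem reads: if n_A denotes the number of occurrences of A in n independent trials, then for every \varepsilon > 0,

    \lim_{n \to \infty} P\!\left( \left| \frac{n_A}{n} - p \right| \ge \varepsilon \right) = 0 .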

Around 1716, A. de Moivre used his asymptotic formula for n! (the so-called Stirling formula) to prove further that the binomial distribution is asymptotically normal (the German mathematician C. F. Gauss re-derived the normal distribution in 1809 while studying the theory of measurement errors, which is why it is also called the Gaussian distribution). De Moivre's result, obtained for the case p = 1/2, was later extended by the French mathematician P.-S. Laplace to general p with 0 < p < 1, yielding the original form of the second basic limit theorem (see central limit theorem).
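
Written out in modern notation, the two formulas involved here are the Stirling asymptotic for n! and the De Moivre-Laplace theorem for the number S_n of successes in n trials with success probability p (standard statements, added for reference):

    n! \sim \sqrt{2\pi n}\, \left( \frac{n}{e} \right)^{n},
    \qquad
    P\!\left( a \le \frac{S_n - np}{\sqrt{np(1-p)}} \le b \right)
    \;\longrightarrow\; \frac{1}{\sqrt{2\pi}} \int_{a}^{b} e^{-x^{2}/2}\, dx
    \quad (n \to \infty).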

Laplace made great contributions to the development of probability theory. Systematically summarizing the work of his predecessors, he wrote the Théorie analytique des probabilités (published in 1812 and reprinted six times thereafter).

In this book he stated, for the first time, a clear classical definition of probability (usually called classical probability; see probability) and introduced into probability theory more powerful analytic tools, such as difference equations and generating functions, thereby accomplishing the transition from mere combinatorial calculation to analytic methods and pushing probability theory into a new stage of development. Laplace attached great importance to the practical applications of probability theory and was especially interested in demography.

After Laplace, the central research topic of probability theory was to generalize and perfect Bernoulli's law of large numbers and the De Moivre-Laplace limit theorem. Here the Russian mathematician Chebyshev took the decisive step: in 1866 he used the inequality that now bears his name to establish a law of large numbers for sequences of independent random variables.

The following year he established the central limit theorem for sequences of independent random variables whose absolute moments of all orders are uniformly bounded. His proof, however, was not rigorous; it was completed by A. A. Markov in 1898. In 1901, A. M. Lyapunov proved the central limit theorem for a much wider class of sequences of independent random variables by the method of characteristic functions.

He also used this theorem to explain scientifically, for the first time, why many random variables encountered in practice approximately obey the normal distribution. After Lyapunov, A. Ya. Khinchin, A. N. Kolmogorov, P. Lévy, and W. Feller made great contributions to the limit theory of sequences of random variables.

By the 1930s, the limit theory of sequences of independent random variables was essentially complete. During this period, driven by practical problems, and especially by the needs of physics, people began to study stochastic processes.

What is the history of statistics? The English word "statistics", when used as a plural noun, denotes statistical data; when used as a singular noun, it denotes the discipline of statistics.

Generally speaking, the word covers three things: statistical work, statistical data, and statistical science, and the three are closely related. Statistical data are the result of statistical work, and statistical science grows out of statistical work.

Primitive statistical work, that is, the collection of raw data, has a history of thousands of years, whereas statistics as a science began in the 17th century. In English the same word, "statistician", covers both statistical workers and statistical scientists, yet statistical science was not produced directly as a summary of the experience of statistical work.

Every science has its own conditions of establishment and development, and statistical science is a boundary discipline synthesized, refined, and developed from the experience of statistical work, socio-economic theory, and quantitative methods. 1. The word "statistics" comes from the investigation of national conditions; its original meaning is the study of the state of a nation.

In the 17th century, "political arithmetic" aroused great interest in England. In 1662, John Graunt published his first and only work, Natural and Political Observations Made upon the Bills of Mortality, which analyzed the ratio of male to female births and developed the kind of mortality table still used by insurance companies today.

In the middle of the 18th century, the German scholar Gottfried Achenwall coined the word that became the English "statistics", deriving it from terms for the state of the nation and from German political arithmetic. It was first used in English by John Sinclair and appeared in the Encyclopaedia Britannica in 1797. (In the early days another word, "publicistics", competed with "statistics" for the meaning "the study of national conditions"; had it won, the field might bear that name today.)

2. On the Gaussian, or normal, distribution: In 1733, De Moivre presented the normal curve in a paper circulated among friends (a piece of history that was at first overlooked), and Laplace later proposed that the normal curve equation is suitable for expressing the probability distribution of errors. In 1809, Gauss published his masterpiece on the theory of the motion of celestial bodies; in the third section of its second volume, he derived the normal curve as an expression of the law of errors and acknowledged Laplace's earlier derivation.

Owing to Gauss's work, the normal distribution became widely known in the first half of the 19th century, and it is therefore usually called the Gaussian distribution. Karl Pearson pointed out that De Moivre was the originator of the normal curve, and Pearson was the first to call it the normal distribution; people nevertheless still habitually speak of the Gaussian distribution.
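
For reference, the normal (Gaussian) curve discussed in this item is the density

    f(x) = \frac{1}{\sigma \sqrt{2\pi}} \exp\!\left( -\frac{(x - \mu)^{2}}{2\sigma^{2}} \right),

where \mu is the mean and \sigma the standard deviation (a standard formula, added here; it is not written out in the original answer).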

3. On the method of least squares: In 1805, Legendre proposed the method of least squares. Gauss claimed to have used it as early as 1794, and in 1809 he gave a rigorous derivation of it based on the assumption that the errors follow a Gaussian distribution.
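
In its simplest setting (a standard statement, added for reference), the method fits a line y = a + bx to data points (x_i, y_i) by minimizing the sum of squared errors S(a, b) = \sum_i (y_i - a - b x_i)^2; setting the partial derivatives to zero yields

    \hat{b} = \frac{\sum_{i} (x_i - \bar{x})(y_i - \bar{y})}{\sum_{i} (x_i - \bar{x})^{2}},
    \qquad
    \hat{a} = \bar{y} - \hat{b}\, \bar{x}.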

4. Other important developments in the middle of the 19th century took place in three different fields, each premised on the view that randomness is inherent in nature. Adolphe Quetelet (1869) used probabilistic concepts to describe sociological and biological phenomena (extending the normal curve from observational errors to data of many kinds); G. Mendel (1870) formulated his laws of heredity through a simple random structure; and Boltzmann (around 1870) gave a statistical interpretation of the second law of thermodynamics. In 1859, Darwin published On the Origin of Species. Darwin's work deeply influenced his cousin Francis Galton, who was better versed in mathematics than Darwin; Galton began to use probabilistic tools to analyze biological phenomena and made important contributions to the foundations of biometry (he might be called the father of biometry). Galton was the first to use the two important concepts of correlation and regression, and he also introduced the median and percentiles.

Influenced by Galton's work, Karl Pearson, working at University College London, began applying mathematics and probability theory to Darwin's theory of evolution, thereby opening the era of modern statistics and earning the title of father of statistics. The first issue of Biometrika appeared in 1901 (Pearson was one of its founders). 5. Population and sample: In the early literature one can find clear examples of sampling from a population, but the understanding that one can learn about a population only through samples was often lacking.

From the time of K. Pearson, at the end of the 19th century, the difference between sample and population was known, but the distinction was not always observed. In 1910, Yule pointed this out in his textbook.

In the early 1900s the distinction became sharper, and it was especially emphasized by Fisher in 1922. In an important paper of that year, "On the Mathematical Foundations of Theoretical Statistics", Fisher explained the connection and the difference between the concepts of population and sample, laying the foundations of theoretical statistics.

6. Expectation, standard deviation, and variance: Expectation is an older concept than probability; in the seventeenth century, in the time of Pascal and Fermat, the concept of expectation was already accepted. Pearson was the first to define the concept of the standard deviation.

In 1918, Fisher introduced the concept of variance. The analogy between the moment in mechanics and the mean in statistics had already been noticed by workers in probability theory, and K. Pearson first used the statistical term "moment" in 1893.
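
In today's notation the quantities in this item are related as follows (standard definitions, added for reference):

    \operatorname{Var}(X) = E\big[(X - EX)^{2}\big] = E[X^{2}] - (EX)^{2},
    \qquad \sigma = \sqrt{\operatorname{Var}(X)},
    \qquad \mu_{k} = E[X^{k}] \ \text{(the } k\text{-th moment).}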

7. The chi-square statistic: The chi-square statistic was proposed by Karl Pearson to test whether given data come from a specified random model, or whether the data are consistent with a given hypothesis. The chi-square test is regarded as one of the twenty foremost inventions in all branches of science and technology since 1900, and even Pearson's old rival Fisher rated it highly.
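
Pearson's statistic compares the observed counts O_i in k categories with the counts E_i expected under the hypothesized model (a standard form, added for reference; large values indicate disagreement with the hypothesis):

    \chi^{2} = \sum_{i=1}^{k} \frac{(O_i - E_i)^{2}}{E_i}.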

8. Moment estimation and maximum likelihood: Karl Pearson proposed the method of estimating parameters by moments. Between 1912 and 1922, Fisher developed the method of maximum likelihood estimation and, guided by intuition, put forward the concepts of consistency, efficiency, and sufficiency of estimators.
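
A standard textbook example (not drawn from the original answer) shows both ideas at once. For n independent coin tosses with unknown head probability p and k observed heads, maximizing the likelihood gives the same answer as matching the first moment E[X] = p to the sample mean:

    L(p) = p^{k} (1 - p)^{n - k},
    \qquad
    \frac{d}{dp} \log L(p) = \frac{k}{p} - \frac{n - k}{1 - p} = 0
    \;\Longrightarrow\; \hat{p} = \frac{k}{n}.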

9. The axiomatization of probability: In 1933, the Soviet mathematician Kolmogorov published his Foundations of the Theory of Probability, which laid a rigorous mathematical foundation for probability theory. 10. Bayes' theorem: Bayes made almost no direct contribution to statistics in his lifetime, yet a single paper of his became the focus of the Bayesian school's mode of statistical thinking. The paper was published in 1763 by Richard Price, a close friend of Bayes and a noted pioneer of the principles of life insurance.
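
For reference (standard modern statements, not spelled out in the original answer), Kolmogorov's axioms require a probability measure P on a space \Omega of outcomes to satisfy

    P(A) \ge 0, \qquad P(\Omega) = 1, \qquad
    P\!\left( \bigcup_{i=1}^{\infty} A_i \right) = \sum_{i=1}^{\infty} P(A_i)
    \ \text{for pairwise disjoint events } A_1, A_2, \dots,

and the theorem named after Bayes states, for P(B) > 0,

    P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}.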