Probability theory requires students to be familiar with the concept of probability, and the knowledge points themselves are generally not difficult. The following summary of the knowledge points of probability theory is what I would like to share with you. You are welcome to browse it.
Summary of knowledge points of probability theory
Chapter 1 Basic concepts of probability theory
1. Random experiment
Deterministic phenomenon: a phenomenon that inevitably occurs in nature is called a deterministic phenomenon.
Random phenomenon: a phenomenon that shows uncertainty in individual trials but statistical regularity over a large number of trials is called a random phenomenon.
Random experiment: an experiment carried out to study the statistical laws of random phenomena is called a random experiment.
Characteristics of a random experiment:
1) It can be repeated under the same conditions;
2) Each trial has more than one possible result, and all possible results of the trial can be specified in advance;
3) Before a trial is carried out, it is not certain which result will occur.
2. Sample space, random events
Sample space: the set of all possible results of a random experiment E is called the sample space of E, denoted S.
Sample point: each result of E, i.e. each element of the sample space, is called a sample point.
The basic relations between events: inclusion, equality, sum event (union), product event (intersection), difference event (A-B: A occurs but B does not), mutually exclusive events (the intersection is the empty set; the union is not necessarily the whole space), and opposite (complementary) events (the intersection is the empty set and the union is the whole space).
Operations on events: the commutative law, associative law, distributive law, and De Morgan's laws (these are best understood through Venn diagrams).
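The event laws above can be checked directly with Python sets; the sample space and the events A, B, C below are arbitrary illustrative choices:

```python
# Checking the event algebra laws with Python sets (a small sketch;
# the sample space S and events A, B, C are arbitrary illustrative choices).
S = set(range(1, 11))          # sample space: outcomes 1..10
A = {1, 2, 3, 4}               # event A
B = {3, 4, 5, 6}               # event B
C = {2, 5, 8}                  # event C

complement = lambda E: S - E   # the opposite (complementary) event

# De Morgan's laws: (A ∪ B)' = A' ∩ B'  and  (A ∩ B)' = A' ∪ B'
assert complement(A | B) == complement(A) & complement(B)
assert complement(A & B) == complement(A) | complement(B)

# Distributive law: A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)
assert A & (B | C) == (A & B) | (A & C)
print("all laws hold")
```

Set union, intersection, and difference correspond exactly to the sum, product, and difference events, which makes sets a convenient stand-in for Venn diagrams.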
3. Frequency and probability
Frequency count: the number of times n_A that event A occurs in n trials.
Frequency: f_n(A) = n_A / n.
Probability: as the number of repeated trials n increases, the frequency tends to a stable value; this stable value is the probability.
Characteristics of probability: 1) non-negativity; 2) normalization; 3) countable additivity.
Properties of probability: 1) P(empty set) = 0; 2) finite additivity; 3) the addition formula: P(A+B) = P(A) + P(B) - P(AB).
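The addition formula can be verified on a small equally-likely sample space; the die roll and the two events below are illustrative choices:

```python
from fractions import Fraction

# Illustrating the addition formula P(A + B) = P(A) + P(B) - P(AB)
# on one roll of a fair die (the events are chosen for illustration).
S = set(range(1, 7))                 # sample space of a fair die
A = {2, 4, 6}                        # event "even number"
B = {4, 5, 6}                        # event "at least 4"

P = lambda E: Fraction(len(E), len(S))   # classical probability

lhs = P(A | B)                       # P(A + B) computed directly
rhs = P(A) + P(B) - P(A & B)         # via the addition formula
print(lhs, rhs)                      # both equal 2/3
```

Using `Fraction` keeps the arithmetic exact, so the two sides match without floating-point error.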
4. Classical probability
Learn to use the knowledge of permutations and combinations to solve some simple classical-probability problems (the lottery problem, the hypergeometric distribution, allocation problems, insertion problems, grouping problems, etc.).
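As a sketch of the lottery (hypergeometric) problem: the urn sizes below (5 winning tickets among 50, drawing 6) are assumed numbers for illustration only:

```python
from math import comb

# Classical probability for the lottery (hypergeometric) problem:
# an urn holds K winning and N-K losing tickets; draw n without replacement.
# P(exactly k winners) = C(K,k) * C(N-K, n-k) / C(N, n).
# The numbers N=50, K=5, n=6 are illustrative assumptions.
def hyper(k, K=5, N=50, n=6):
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

# The probabilities over all possible k must sum to 1.
total = sum(hyper(k) for k in range(0, 6))
print(round(total, 10))   # 1.0
```

The sum-to-one check is a quick sanity test that the counting in the numerator really partitions the C(N, n) equally likely draws.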
5. Conditional probability
Definition: the probability that B occurs given that event A has occurred, P(B|A) = P(AB)/P(A).
Multiplication formula: P(AB)=P(B|A)P(A)
Total probability formula: P(B) = Σ P(B|A_i)P(A_i); Bayes' formula: P(A_i|B) = P(B|A_i)P(A_i) / Σ P(B|A_j)P(A_j).
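A minimal worked example of the total probability and Bayes formulas; the factory scenario and all numbers are assumptions made up for illustration:

```python
from fractions import Fraction as F

# Three machines A_1, A_2, A_3 produce 50%, 30%, 20% of output, with
# defect rates 1%, 2%, 3% (illustrative numbers).  B = "item is defective".
prior  = [F(5, 10), F(3, 10), F(2, 10)]       # P(A_i)
defect = [F(1, 100), F(2, 100), F(3, 100)]    # P(B | A_i)

# Total probability formula: P(B) = sum of P(B|A_i) * P(A_i)
p_b = sum(p * d for p, d in zip(prior, defect))

# Bayes' formula: P(A_1 | B) = P(B|A_1) P(A_1) / P(B)
post1 = prior[0] * defect[0] / p_b
print(p_b, post1)   # 17/1000 and 5/17
```

Note how Bayes' formula reuses the total-probability result as its denominator; in exam problems the two are almost always applied together in exactly this way.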
6. Independence
Let A and B be two events. If the equation
P(AB)=P(A)P(B)
holds, then events A and B are said to be mutually independent, or independent for short.
Chapter 2 Random variables and their distributions
1. Random variable
Definition: let the sample space of a random experiment be S={e}. A single-valued real function X=X(e) defined on the sample space S is called a random variable.
2. Discrete random variables and their distribution law
Three common discrete distributions:
1) The (0-1) distribution: E(X)=p, D(X)=p(1-p).
2) Bernoulli trials and the binomial distribution: E(X)=np, D(X)=np(1-p).
3) The Poisson distribution: P(X=k) = λ^k e^(-λ) / k! (k=0, 1, 2, ...), E(X)=λ, D(X)=λ.
Note: when n in the binomial distribution is large and p is small, it can be approximated by the Poisson distribution with λ=np.
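The Poisson approximation can be seen numerically; n = 1000 and p = 0.002 below are illustrative choices of a "large n, small p" case:

```python
from math import comb, exp, factorial

# Comparing the binomial pmf with its Poisson approximation (λ = np)
# for large n and small p; n, p, and the range of k are illustrative.
n, p = 1000, 0.002
lam = n * p                                   # λ = np = 2

def binom_pmf(k):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k):
    return lam**k * exp(-lam) / factorial(k)

for k in range(5):
    # the two columns agree to about three decimal places
    print(k, round(binom_pmf(k), 4), round(poisson_pmf(k), 4))
```

The approximation improves as n grows with np held fixed, which is exactly the limit in which the binomial pmf converges to the Poisson pmf.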
3. Distribution function of random variables
Definition: let X be a random variable and x an arbitrary real number. The function
F(x) = P(X ≤ x)
is called the distribution function of X.
Properties of the distribution function:
1) F(x) is a non-decreasing function;
2) 0 ≤ F(x) ≤ 1.
Finding the distribution function of a discrete random variable (obtain the distribution function from the distribution law).
Finding the distribution function of a continuous random variable (obtain the distribution function from its graph, or by integrating the probability density).
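For the discrete case, the distribution function is just a running sum of the distribution law; the values and probabilities below are an illustrative example:

```python
from fractions import Fraction

# Building the distribution function F(x) = P(X ≤ x) of a discrete
# random variable from its distribution law.  The values 0, 1, 2 and
# their probabilities are an illustrative example; Fractions keep the
# arithmetic exact.
law = {0: Fraction(1, 5), 1: Fraction(1, 2), 2: Fraction(3, 10)}   # P(X = k)

def cdf(x):
    # step function: add up P(X = k) over all values k not exceeding x
    return sum(p for k, p in law.items() if k <= x)

print(cdf(-1), cdf(0), cdf(1.5), cdf(2))   # 0 1/5 7/10 1
```

The result is a right-continuous step function that jumps by P(X = k) at each value k, illustrating properties 1) and 2) above.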
4. Continuous random variables and their probability density
The distribution function of a continuous random variable equals the integral of its probability density from negative infinity to x, F(x) = ∫ f(t)dt over (-∞, x]; conversely, wherever F is differentiable, the density is its derivative, f(x) = F'(x).
Properties of the density function:
1) f(x) ≥ 0;
2) the integral of the density function from negative infinity to positive infinity equals 1.
Three common continuous distributions:
1) The uniform distribution: E(X)=(a+b)/2, D(X)=(b-a)²/12.
2) The exponential distribution: E(X)=θ, D(X)=θ².
3) The normal distribution (general form and the standard normal distribution).
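The moment formulas for the uniform and exponential distributions can be checked by integrating the densities numerically; a, b, θ and the crude midpoint rule are illustrative assumptions:

```python
from math import exp

# Numerically checking E(X) and D(X) for the uniform distribution on
# [a, b] and the exponential distribution with mean θ.  The parameter
# values and the simple midpoint rule are illustrative choices.
def integrate(f, lo, hi, n=100000):
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

a, b = 2.0, 5.0
mean_u = integrate(lambda x: x / (b - a), a, b)                 # ≈ (a+b)/2 = 3.5
var_u = integrate(lambda x: x * x / (b - a), a, b) - mean_u**2  # ≈ (b-a)²/12 = 0.75

theta = 2.0
f_exp = lambda x: exp(-x / theta) / theta
mean_e = integrate(lambda x: x * f_exp(x), 0, 100)              # ≈ θ = 2
var_e = integrate(lambda x: x * x * f_exp(x), 0, 100) - mean_e**2  # ≈ θ² = 4

print(round(mean_u, 3), round(var_u, 3), round(mean_e, 3), round(var_e, 3))
```

Cutting the exponential integral off at 100 is harmless here because the tail mass beyond that point is negligible for θ = 2.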
5. Distribution of a function of a random variable
1) Given the distribution function of the random variable X, find the distribution function of Y=g(X).
2) Given the density function of the random variable X, find the density function of Y=g(X).
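Method 1) above can be sketched for a monotone g, where F_Y(y) = P(g(X) ≤ y) = F_X(g⁻¹(y)); the choices X uniform on (0, 1) and g(x) = x² are illustrative assumptions, checked here by simulation:

```python
import random

# For X ~ uniform(0, 1) and Y = X², monotone on (0, 1), the method gives
# F_Y(y) = P(X² ≤ y) = P(X ≤ √y) = √y for 0 < y < 1.
# We compare this with the empirical distribution of simulated values.
random.seed(0)
samples = [random.random() ** 2 for _ in range(100000)]

y = 0.25
empirical = sum(s <= y for s in samples) / len(samples)
print(abs(empirical - y ** 0.5) < 0.01)   # F_Y(0.25) = √0.25 = 0.5
```

Differentiating F_Y(y) = √y then gives the density of Y, which is method 2) applied to the same transformation.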
Chapter 3 Multidimensional random variables and their distributions (mainly discussing the distributions of two-dimensional random variables)
1. Two-dimensional random variable
Let (X, Y) be a two-dimensional random variable. For any real numbers x and y, the binary function
F(x, y) = P{(X ≤ x) ∩ (Y ≤ y)}
is called the distribution function of the two-dimensional random variable (X, Y), or the joint distribution function of the random variables X and Y.
Distribution function and density function of discrete random variables
Distribution function and density function of continuous random variables
Focus on mastering how to obtain the distribution function using double integrals.
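The double-integral step can be sketched numerically; the joint density f(u, v) = u + v on the unit square and the crude midpoint rule are illustrative assumptions:

```python
# Computing the joint distribution function
# F(x, y) = double integral of f(u, v) over (-∞, x] × (-∞, y]
# for the illustrative density f(u, v) = u + v on the unit square.
def f(u, v):
    return u + v if 0 <= u <= 1 and 0 <= v <= 1 else 0.0

def F(x, y, n=400):
    # crude midpoint rule over [0, x] × [0, y] (density is 0 below 0)
    hx, hy = x / n, y / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            total += f((i + 0.5) * hx, (j + 0.5) * hy)
    return total * hx * hy

print(round(F(1.0, 1.0), 4), round(F(0.5, 0.5), 4))   # 1.0 and 0.125
```

F(1, 1) = 1 confirms f integrates to one over the whole plane; an exact calculation of F(0.5, 0.5) by iterated integration gives the same 1/8.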
2. Marginal distributions
Marginal probability (distribution law) of a discrete random variable.
Marginal probability density of a continuous random variable.
3. Mutually independent random variables
If X and Y are mutually independent, then the joint probability density of X and Y equals the product of their marginal densities.
4. Distributions of functions of two random variables
The key is to find the probability density of Z = X + Y with the convolution formula.
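The convolution formula f_Z(z) = ∫ f_X(x) f_Y(z-x) dx can be evaluated numerically; the choice of X, Y both uniform on (0, 1) is an illustrative assumption, for which the known result is the triangular density f_Z(z) = z on [0, 1] and 2-z on [1, 2]:

```python
# Convolution formula for Z = X + Y with X, Y independent and each
# uniform on (0, 1) (illustrative choice): f_Z(z) = ∫ f_X(x) f_Y(z-x) dx.
def f_uniform(x):
    return 1.0 if 0 <= x <= 1 else 0.0

def f_Z(z, n=20000):
    # midpoint rule over x in [0, 1]; f_X vanishes outside that interval
    h = 1.0 / n
    return sum(
        f_uniform((i + 0.5) * h) * f_uniform(z - (i + 0.5) * h)
        for i in range(n)
    ) * h

print(round(f_Z(0.5), 3), round(f_Z(1.0), 3), round(f_Z(1.5), 3))   # 0.5 1.0 0.5
```

The triangular shape shows why the sum of two independent uniforms is no longer uniform: the density peaks where the most (x, z-x) combinations overlap.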
Chapter 4 Numerical characteristics of random variables
1. Mathematical expectation
Finding the mathematical expectation of discrete and continuous random variables.
Mathematical expectations of the six common distributions.
2. Variance
Computing formula for the variance (discrete or continuous):
D(X)=E(X^2)-[E(X)]^2
Basic properties of variance:
1) Let C be a constant; then D(C)=0.
2) Let X be a random variable and C a constant; then
D(CX)=C^2 D(X)
3) Let X and Y be two random variables; then
D(X+Y) = D(X) + D(Y) + 2E{(X-E(X))(Y-E(Y))}. In particular, if X and Y are uncorrelated, then D(X+Y) = D(X) + D(Y).
Chebyshev's inequality: P{|X-E(X)| ≥ ε} ≤ D(X)/ε².
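The variance properties above can be checked by simulation; the uniform distributions of X and Y below are arbitrary illustrative choices:

```python
import random

# Checking the variance properties by simulation.  The uniform(0, 1)
# distributions of X and Y and the constant C = 3 are illustrative.
random.seed(1)
n = 200000
X = [random.uniform(0, 1) for _ in range(n)]
Y = [random.uniform(0, 1) for _ in range(n)]   # generated independently of X

def var(v):
    m = sum(v) / len(v)
    return sum((x - m) ** 2 for x in v) / len(v)

# Property 2): D(CX) = C² D(X) holds exactly for sample variances too
print(abs(var([3 * x for x in X]) - 9 * var(X)) < 1e-6)

# Property 3): for independent (hence uncorrelated) X, Y,
# D(X+Y) ≈ D(X) + D(Y) up to simulation noise
s = var([x + y for x, y in zip(X, Y)])
print(abs(s - (var(X) + var(Y))) < 0.005)
```

The first identity holds exactly because scaling every sample by C scales every squared deviation by C²; the second holds only approximately, since the sample covariance of independent draws is small but not zero.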
3. Covariance and correlation coefficient
Covariance: Cov(X, Y) = E{(X-E(X))(Y-E(Y))}
Correlation coefficient: ρ = Cov(X, Y) / (√D(X) · √D(Y))
When the correlation coefficient equals 0, X and Y are uncorrelated, which is equivalent to Cov(X, Y) = 0. Uncorrelated does not necessarily mean independent, but independence does imply uncorrelatedness.
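The classic example behind "uncorrelated does not imply independent" is X symmetric about 0 with Y = X², so Cov(X, Y) = E(X³) - E(X)E(X²) = 0 even though Y is completely determined by X; the choice X ~ uniform(-1, 1) is an illustrative assumption, checked here by simulation:

```python
import random

# Uncorrelated but not independent: X symmetric about 0, Y = X².
# X ~ uniform(-1, 1) is an illustrative choice.
random.seed(2)
n = 200000
X = [random.uniform(-1, 1) for _ in range(n)]
Y = [x * x for x in X]

mx = sum(X) / n
my = sum(Y) / n
cov = sum((x - mx) * (y - my) for x, y in zip(X, Y)) / n

print(abs(cov) < 0.01)                          # sample covariance near zero
print(all(y == x * x for x, y in zip(X, Y)))    # yet Y is a function of X
```

Both printed values are True: the covariance vanishes (up to simulation noise) while the dependence is total, which is exactly the gap between uncorrelatedness and independence.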