(2) Draw the feasible region represented by the constraint conditions.
(3) Find the optimal solution of the objective function within the feasible region.

The development of linear programming

The French mathematician J.-B.-J. Fourier and the Belgian mathematician C. de la Vallée Poussin independently put forward the idea of linear programming in 1832 and 1911 respectively, but neither attracted attention at the time.
In 1939 the Soviet mathematician L. V. Kantorovich posed linear programming problems in his book Mathematical Methods in the Organization and Planning of Production, but this work, too, attracted little attention.
In 1947 the American mathematician G. B. Dantzig put forward a general mathematical model of linear programming together with a general method for solving it, the simplex method, which laid the foundation of the subject.
In the same year the American mathematician J. von Neumann put forward duality theory, which opened up many new research directions in linear programming and widened its range of applications and its problem-solving power.
In 1951 the American economist T. C. Koopmans applied linear programming to economics; for this work he and Kantorovich jointly received the 1975 Nobel Prize in Economics.
Since the 1950s a great deal of theoretical work has been done on linear programming and many new algorithms have appeared. For example, in 1954 C. E. Lemke proposed the dual simplex method; in the same year S. Gass and T. Saaty treated sensitivity analysis and parametric programming; in 1956 A. W. Tucker proposed the complementary slackness theorem; and in 1960 G. B. Dantzig and P. Wolfe proposed the decomposition algorithm for large-scale problems.
These results have also directly stimulated algorithmic research on other mathematical programming problems, including integer programming, stochastic programming and nonlinear programming. With the development of digital computers, many linear programming software packages have appeared, such as MPSX, OPHELIE and others, which can easily solve linear programming problems with thousands of variables.
In 1979 the Soviet mathematician L. G. Khachiyan proposed the ellipsoid algorithm for linear programming and proved that it runs in polynomial time.
In 1984 the Indian mathematician N. Karmarkar, working at Bell Laboratories in the United States, proposed a new polynomial-time algorithm for linear programming; for problems with 5000 variables it needs only about 1/50 of the time taken by the simplex method. With it, the polynomial-time algorithm theory of linear programming took shape. Since the 1950s the range of applications of linear programming has grown steadily.

Method of establishing a linear programming model

Building a linear programming model generally involves the following three steps:
1. Identify the decision variables from the factors that influence the goal to be achieved;
2. Determine the objective function from the functional relationship between the decision variables and the goal;
3. Determine the constraints that the decision variables must satisfy from the restrictions imposed on them.
The established mathematical model has the following characteristics:
1. Each model has several decision variables (x1, x2, x3, ..., xn), where n is the number of decision variables. A set of values of the decision variables represents one scheme, and the decision variables are generally non-negative.
2. The objective function is a linear function of the decision variables; depending on the problem it is to be maximized (max) or minimized (min), and both cases are collectively referred to as optimization (opt).
3. Constraints are also linear functions of decision variables.
When the objective function of the resulting mathematical model is a linear function and the constraints are linear equalities or inequalities, the model is called a linear programming model.
Example:
Production scheduling model: a factory has to arrange production of two products, I and II. Producing one unit of product I takes 1 equipment hour and 4 units of raw material A; producing one unit of product II takes 2 equipment hours and 4 units of raw material B. At most 8 equipment hours, 16 units of raw material A and 12 units of raw material B are available per day. Each unit of product I brings a profit of 2 yuan and each unit of product II a profit of 3 yuan. How should production be arranged to maximize profit?
Solution:
1. Determine the decision variables: let x1 and x2 be the quantities of products I and II to be produced;
2. Specify the objective function: maximize profit, i.e. find the maximum of 2x1 + 3x2;
3. Satisfy the constraints:
Equipment limit: x1 + 2x2 ≤ 8
Raw material A limit: 4x1 ≤ 16
Raw material B limit: 4x2 ≤ 12
Basic requirement: x1, x2 ≥ 0.
Writing max for maximization and s.t. (short for "subject to") before the constraints, the model can be written as:
max z = 2x1 + 3x2
s.t. x1 + 2x2 ≤ 8
     4x1 ≤ 16
     4x2 ≤ 12
     x1, x2 ≥ 0

Solving linear programming problems

The basic method for solving linear programming problems is the simplex method. Standard simplex-method software now exists that can solve problems with more than 10,000 constraints and decision variables on a computer. To speed up computation, the revised simplex method, the dual simplex method, the primal-dual method, decomposition algorithms and various polynomial-time algorithms have been developed. Simple linear programming problems with only two variables can also be solved graphically; the graphical method applies only to two-variable problems and, while intuitive and easy to understand, has little practical value. It is useful, however, for illustrating some basic concepts of linear programming.
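For the production example above, the optimum can also be obtained with an off-the-shelf solver. The following is only a minimal sketch, assuming SciPy is available; since linprog minimizes by default, the profit is negated.

# Minimal sketch: solving the production example with SciPy's linprog.
# Maximize 2*x1 + 3*x2 by minimizing -2*x1 - 3*x2.
from scipy.optimize import linprog

c = [-2, -3]
A_ub = [[1, 2],    # equipment hours:  x1 + 2*x2 <= 8
        [4, 0],    # raw material A:   4*x1      <= 16
        [0, 4]]    # raw material B:         4*x2 <= 12
b_ub = [8, 16, 12]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)   # optimum: x1 = 4, x2 = 2, maximum profit z = 14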
For the general linear programming problem (programming problem 1):

min z = CX
s.t. AX = b
     X ≥ 0

where A is an m×n matrix.

If A has full row rank, a basis matrix B and an initial basic solution can be found.
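As a concrete illustration (the slack variables and the code names below are added here for the example, not part of the original statement), the production model above is brought into this form by adding slack variables x3, x4, x5 to its three inequality constraints; the slack columns form an identity matrix, which immediately provides a basis matrix B and an initial basic solution. A sketch with NumPy:

import numpy as np

# Standard form of the production example after adding slacks x3, x4, x5:
#   min -2*x1 - 3*x2   s.t.   A X = b,  X >= 0
A = np.array([[1, 2, 1, 0, 0],
              [4, 0, 0, 1, 0],
              [0, 4, 0, 0, 1]], dtype=float)
b = np.array([8, 16, 12], dtype=float)
c = np.array([-2, -3, 0, 0, 0], dtype=float)

basis = [2, 3, 4]              # columns of the slack variables: B = I
B = A[:, basis]
x_B = np.linalg.solve(B, b)    # initial basic solution: x3 = 8, x4 = 16, x5 = 12
print(x_B)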
Let N denote the non-basis matrix corresponding to B (the remaining columns of A); then programming problem 1 can be rewritten as:
Programming problem 2:
min z = C_B X_B + C_N X_N
s.t. B X_B + N X_N = b   (1)
     X_B ≥ 0, X_N ≥ 0    (2)

Multiplying both sides of (1) by B⁻¹ gives
X_B + B⁻¹N X_N = B⁻¹b
Substituting X_B = B⁻¹b − B⁻¹N X_N into the objective function transforms the problem further into

Programming problem 3:
min z = C_B B⁻¹b + (C_N − C_B B⁻¹N) X_N
s.t. X_B + B⁻¹N X_N = B⁻¹b   (1)
     X_B ≥ 0, X_N ≥ 0        (2)

Now let N := B⁻¹N, b := B⁻¹b, ζ = C_B B⁻¹b and σ = C_N − C_B B⁻¹N; the problem then takes the form of programming problem 4:
min z = ζ + σ X_N
s.t. X_B + N X_N = b   (1)
     X_B ≥ 0, X_N ≥ 0  (2)

If in the resulting programming problem 4 we have b ≥ 0, this form is called the initial basic solution form.
The transformation above is equivalent to multiplying the whole augmented matrix (including the objective row C and the matrix A) on the left by an augmented form of B⁻¹, so the choice of B, and with it of the corresponding C_B, is very important.
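Carrying this reduction out numerically for the production example with the slack basis chosen earlier gives ζ and σ directly. This is again only an illustrative sketch under the same assumptions:

import numpy as np

# Reduced form of the production example for the slack basis {x3, x4, x5}:
# beta = B^-1 b, zeta = C_B B^-1 b, sigma = C_N - C_B B^-1 N.
A = np.array([[1, 2, 1, 0, 0],
              [4, 0, 0, 1, 0],
              [0, 4, 0, 0, 1]], dtype=float)
b = np.array([8, 16, 12], dtype=float)
c = np.array([-2, -3, 0, 0, 0], dtype=float)
basis, nonbasis = [2, 3, 4], [0, 1]

B_inv = np.linalg.inv(A[:, basis])
beta = B_inv @ b                              # B^-1 b = [8, 16, 12] >= 0
zeta = c[basis] @ beta                        # constant term of the objective, here 0
sigma = c[nonbasis] - c[basis] @ B_inv @ A[:, nonbasis]
print(beta, zeta, sigma)                      # sigma = [-2, -3]: some components are negative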
Suppose an initial basic solution exists.
If σ ≥ 0, then z ≥ ζ for every feasible solution. At the same time, setting X_N = 0 and X_B = b gives a feasible solution with z = ζ, so the optimal value is attained and an optimal solution is obtained.
If σ ≥ 0 does not hold, a simplex tableau transformation is applied.
There is then a component σ_j < 0. If P_j ≤ 0 does not hold, then P_j (the transformed column of the variable x_j) has at least one positive component a_{i,j}. Multiply both sides of constraint (1) of programming problem 4 by the matrix T, where T agrees with the identity matrix except in its i-th column, whose q-th entry is −a_{q,j}/a_{i,j} for q ≠ i and 1/a_{i,j} for q = i.
After this transformation the decision variable x_j becomes a basic variable, replacing the basic variable originally attached to row i. To ensure T b ≥ 0 and T P_j = e_i (where e_i denotes the i-th unit vector), we need:
a_{i,j} > 0;
β_q − β_i·(a_{q,j}/a_{i,j}) ≥ 0 for every q ≠ i, that is, β_q ≥ (β_i/a_{i,j})·a_{q,j}.
If a_{q,j} ≤ 0 this holds automatically; if a_{q,j} > 0 it requires β_q/a_{q,j} ≥ β_i/a_{i,j}. Therefore i should be chosen so that β_i/a_{i,j} is as small as possible (the ratio test). If several subscripts attain this minimum, choose the smallest one.
After the transformation we again obtain the form of programming problem 4 and continue to test σ. Since there are only finitely many basic solutions, the procedure leaves this loop after finitely many steps.
If instead a_{i,j} ≤ 0 for every i (that is, P_j ≤ 0 while σ_j < 0), the objective function can be decreased without bound, so the optimal value is unbounded.
If no initial basic solution can be found, the problem has no feasible solution.
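Putting the optimality test, the ratio test and the pivot together, the iteration described above can be sketched as follows. This is only a didactic illustration under the stated assumptions (a known feasible starting basis and a full-row-rank A), not a production solver; the function and variable names are chosen here for clarity.

import numpy as np

def simplex(A, b, c, basis, tol=1e-9):
    # Minimize c @ x subject to A @ x = b, x >= 0, starting from a feasible basis.
    A, b, c = np.asarray(A, float), np.asarray(b, float), np.asarray(c, float)
    m, n = A.shape
    basis = list(basis)
    while True:
        B_inv = np.linalg.inv(A[:, basis])
        beta = B_inv @ b                      # current basic solution B^-1 b
        y = c[basis] @ B_inv                  # simplex multipliers
        sigma = c - y @ A                     # reduced costs (zero on basic columns)
        j = int(np.argmin(sigma))
        if sigma[j] >= -tol:                  # sigma >= 0: current basis is optimal
            x = np.zeros(n)
            x[basis] = beta
            return x, float(c @ x)
        d = B_inv @ A[:, j]                   # transformed entering column P_j
        if np.all(d <= tol):                  # P_j <= 0 with sigma_j < 0: unbounded
            raise ValueError("the objective is unbounded")
        ratios = np.where(d > tol, beta / np.where(d > tol, d, 1.0), np.inf)
        i = int(np.argmin(ratios))            # ratio test: smallest beta_i / a_ij
        basis[i] = j                          # pivot: x_j enters, row i's variable leaves

# The production example again (objective negated because we maximize):
A = [[1, 2, 1, 0, 0], [4, 0, 0, 1, 0], [0, 4, 0, 0, 1]]
x, z = simplex(A, [8, 16, 12], [-2, -3, 0, 0, 0], basis=[2, 3, 4])
print(x[:2], -z)                              # expected: x1 = 4, x2 = 2, profit 14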
If A is not of full row rank, first reduce A to a matrix of full row rank (dropping redundant rows) and then proceed as above.

The application of linear programming

Linear programming is used in all kinds of management activities of enterprises, such as planning, production, transportation and technology. It means choosing the most reasonable plan from among the combinations allowed by the various constraints, by setting up a linear programming model so as to obtain the best result.