How is the optimal solution defined in mathematics?
Using the optimality criterion, i.e., the test numbers (reduced costs) of the non-basic variables after each iteration, a maximization problem can be classified as follows (a small code sketch applying these rules follows the list):

1) When the test numbers of all non-basic variables are strictly less than zero, the original problem has a unique optimal solution;

2) When the test numbers of all non-basic variables are less than or equal to zero, and at least one non-basic variable has a test number equal to zero, there are infinitely many optimal solutions;

3) When some non-basic variable has a test number greater than zero and all entries a_jk of its column (the denominators in the minimum-ratio test) are less than or equal to zero, the original problem has an unbounded solution;

4) For the problem after artificial variables have been added, when the test numbers of all non-basic variables are less than or equal to zero but an artificial variable remains among the basic variables, the original problem has no feasible solution.
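The sketch below is a minimal illustration of rules 1)–4), not code from the source; the function name classify_simplex_tableau, the use of NumPy, and the input format (test numbers and tableau columns of the non-basic variables) are all assumptions made for this example.

```python
import numpy as np

def classify_simplex_tableau(reduced_costs, columns, basis_has_artificial=False):
    """Classify the state of a maximization simplex tableau from the
    test numbers (reduced costs) of its non-basic variables.

    reduced_costs        : dict mapping non-basic variable index -> test number sigma_j
    columns              : dict mapping non-basic variable index -> its tableau column a_j
    basis_has_artificial : True if an artificial variable is still in the basis
    """
    sigmas = np.array(list(reduced_costs.values()), dtype=float)

    if np.all(sigmas <= 0):
        # Optimality reached; distinguish cases 4), 1) and 2).
        if basis_has_artificial:
            return "no feasible solution (artificial variable still in the basis)"
        if np.all(sigmas < 0):
            return "unique optimal solution"
        return "infinitely many optimal solutions (a non-basic test number equals 0)"

    # Some test number is positive: either the problem is unbounded (case 3)
    # or another pivot is needed.
    for j, sigma in reduced_costs.items():
        if sigma > 0 and np.all(np.asarray(columns[j]) <= 0):
            return "unbounded solution (entering column has no positive entry)"
    return "not optimal yet: perform another pivot"
```

For instance, calling it with reduced_costs={3: 0.0, 4: -1.0} and no artificial variable in the basis would report infinitely many optimal solutions.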

In a mathematical programming problem, a feasible solution that minimizes the objective function (in a minimization problem) is called a minimal solution, and a feasible solution that maximizes the objective function (in a maximization problem) is called a maximal solution.

Both minimal and maximal solutions are called optimal solutions, and the corresponding minimum or maximum value of the objective function is called the optimal value. Sometimes the optimal solution and the optimal value together are referred to as the optimal solution of the corresponding mathematical programming problem.
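As a one-line illustration of the terminology (an invented example, not from the source):

```latex
% Minimization problem: the feasible solution x^* = 2 is the minimal (optimal) solution,
% and f(x^*) = 0 is the optimal value.
\min_{x \ge 0} \; f(x) = (x - 2)^2,
\qquad x^* = 2, \quad f(x^*) = 0.
```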

Extended information:

Least-squares estimation is based on the assumption that the model follows a Gaussian distribution. When m groups of sample observations are drawn at random from the model population, the most reasonable parameter estimate is the one that makes the model fit the sample data best, i.e., the one that minimizes the sum of squared differences between the estimated values and the observed values.
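The following NumPy sketch illustrates this idea on an assumed linear model y = a*x + b + noise; the data and parameter values are invented for the example.

```python
import numpy as np

# Hypothetical data: 50 observations of a simple linear model y = a*x + b + noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=x.size)

# Least-squares estimation: choose the parameters that minimize the sum of
# squared differences between estimated and observed values.
A = np.column_stack([x, np.ones_like(x)])              # design matrix [x, 1]
theta, residual_ss, _, _ = np.linalg.lstsq(A, y, rcond=None)
a_hat, b_hat = theta
print(a_hat, b_hat)                                    # should be close to 2.0 and 1.0
```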

For maximum likelihood estimation, when m groups of sample observations are drawn at random from the model population, the most reasonable parameter estimate is the one that maximizes the probability of drawing those m groups of observations from the model.
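A minimal sketch of maximum likelihood estimation for an assumed Gaussian sample; the data and the negative log-likelihood helper are illustrative only.

```python
import numpy as np

# Hypothetical sample: 200 observations assumed to come from a Gaussian N(mu, sigma^2).
rng = np.random.default_rng(1)
sample = rng.normal(loc=5.0, scale=2.0, size=200)

def negative_log_likelihood(mu, sigma, data):
    # Negative log-likelihood of a Gaussian; maximizing the likelihood is
    # equivalent to minimizing this quantity.
    return 0.5 * np.sum(np.log(2 * np.pi * sigma**2) + (data - mu) ** 2 / sigma**2)

# For the Gaussian the maximizers have closed forms: the sample mean and the
# (biased) sample standard deviation.
mu_mle = sample.mean()
sigma_mle = sample.std()
print(mu_mle, sigma_mle)
print(negative_log_likelihood(mu_mle, sigma_mle, sample))
```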

Compared with maximum likelihood estimation, maximum a posteriori estimation differs only in the extra prior probability term, which reflects the Bayesian view that parameters are themselves random variables. In practice the prior distribution is usually specified through hyperparameters. Maximum likelihood estimation is an instance of empirical risk minimization, while maximum a posteriori estimation is an instance of structural risk minimization.
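A small sketch of the difference, under an assumed conjugate Gaussian setting (Gaussian likelihood with known variance, Gaussian prior on the mean whose mean mu0 and standard deviation tau play the role of hyperparameters); all numbers are invented for the example.

```python
import numpy as np

# Hypothetical setting: Gaussian likelihood N(mu, sigma^2) with known sigma,
# and a Gaussian prior N(mu0, tau^2) on mu; mu0 and tau are the hyperparameters.
rng = np.random.default_rng(2)
sigma, mu0, tau = 2.0, 0.0, 1.0
sample = rng.normal(loc=5.0, scale=sigma, size=20)

n = sample.size
mu_mle = sample.mean()                      # maximizes the likelihood alone

# MAP adds the log-prior to the log-likelihood; for this conjugate pair the
# maximizer is a precision-weighted average of the prior mean and the data mean.
precision_prior = 1.0 / tau**2
precision_data = n / sigma**2
mu_map = (precision_prior * mu0 + precision_data * mu_mle) / (precision_prior + precision_data)
print(mu_mle, mu_map)                       # the prior pulls the MAP estimate toward mu0
```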

If the sample is large enough, the maximum a posteriori estimate and the maximum likelihood estimate tend to coincide; if there is no sample data at all, the maximum a posteriori estimate is determined entirely by the prior. Although maximum a posteriori estimation may look more complete than maximum likelihood estimation, many methods still use maximum likelihood estimation because of its simplicity.
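Continuing the same assumed conjugate Gaussian setting, the sketch below shows the two limiting cases mentioned above: with no data the MAP estimate equals the prior mean, and as the sample grows it approaches the MLE.

```python
import numpy as np

# Same hypothetical conjugate Gaussian setting as above; numbers are invented.
sigma, mu0, tau, true_mu = 2.0, 0.0, 1.0, 5.0
rng = np.random.default_rng(3)

def map_estimate(sample):
    n = sample.size
    if n == 0:
        return mu0                          # no data: MAP is determined by the prior alone
    prec_prior, prec_data = 1.0 / tau**2, n / sigma**2
    return (prec_prior * mu0 + prec_data * sample.mean()) / (prec_prior + prec_data)

for n in (0, 10, 100, 10_000):
    sample = rng.normal(loc=true_mu, scale=sigma, size=n)
    mle = sample.mean() if n else float("nan")
    print(n, mle, map_estimate(sample))     # the MAP estimate approaches the MLE as n grows
```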

Reference: Baidu Encyclopedia, "Optimal Solution"