What is optimization?
Optimization is a branch of applied mathematics concerned with choosing, among candidate solutions, the one that best achieves a given objective under given conditions. Optimization problems arise widely in military, engineering, management, and other fields.

Common methods: 1. Gradient descent method. Gradient descent is the earliest, simplest, and most commonly used optimization method.

The gradient descent method is simple to implement. When the objective function is convex, the solution found by gradient descent is a global optimum. In general, however, its solution is not guaranteed to be the global optimum, and gradient descent is not necessarily the fastest method.

The optimization idea of the gradient descent method is to use the negative gradient direction at the current position as the search direction; because this is the direction of fastest decrease at the current position, it is also called the "steepest descent method". The closer steepest descent gets to the target value, the smaller the step becomes and the slower the progress.
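
As a minimal sketch of this idea (the quadratic example function, learning rate, and stopping rule below are illustrative assumptions, not taken from the text), gradient descent repeatedly steps in the negative-gradient direction:

```python
import numpy as np

def gradient_descent(grad, x0, lr=0.1, tol=1e-8, max_iter=1000):
    """Minimize a function given its gradient by stepping in the
    negative-gradient (steepest descent) direction."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:   # stop when the gradient is (nearly) zero
            break
        x = x - lr * g                # move against the gradient
    return x

# Example: f(x, y) = (x - 1)^2 + 2*(y + 3)^2, whose minimum is at (1, -3)
grad_f = lambda v: np.array([2 * (v[0] - 1), 4 * (v[1] + 3)])
print(gradient_descent(grad_f, x0=[0.0, 0.0]))   # ~ [1, -3]
```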

2. Newton's method and quasi-Newton methods. (1) Newton's method: Newton's method is an approximate method for solving equations over the real and complex domains. It uses the first few terms of the Taylor series of a function f(x) to find roots of the equation f(x) = 0. The most notable feature of Newton's method is its fast convergence.
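
A small sketch of the Newton root-finding iteration x_{n+1} = x_n - f(x_n)/f'(x_n), which follows from the first-order Taylor expansion (the example equation x^2 - 2 = 0 is an illustrative assumption):

```python
def newton_root(f, df, x0, tol=1e-12, max_iter=50):
    """Find a root of f(x) = 0 with Newton's iteration,
    derived from the first-order Taylor expansion of f."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x = x - fx / df(x)    # Newton step
    return x

# Example: solve x^2 - 2 = 0, i.e. compute sqrt(2)
print(newton_root(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0))  # ~1.414213562
```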

(2) Quasi-Newton methods: Quasi-Newton methods are among the most effective methods for solving nonlinear optimization problems. The basic idea is to improve on Newton's method, which must compute the inverse of a complicated Hessian matrix at every iteration, by using a positive definite matrix to approximate the inverse of the Hessian, thereby reducing the computational cost. Like the steepest descent method, a quasi-Newton method only needs the gradient of the objective function at each iteration.

By measuring changes in the gradient, it builds a model of the objective function that is good enough to yield superlinear convergence. Such methods perform much better than steepest descent, especially on difficult problems. Moreover, because quasi-Newton methods do not need second-derivative information, they are sometimes more effective than Newton's method. Today, optimization software includes a large number of quasi-Newton algorithms for solving unconstrained, constrained, and large-scale optimization problems.
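
As an illustration, one widely available quasi-Newton algorithm is BFGS, shown here through SciPy's minimize; the objective function and starting point are illustrative assumptions, not from the text:

```python
import numpy as np
from scipy.optimize import minimize

# Objective and gradient for f(x, y) = (x - 2)^2 + 10*(y - x^2)^2
def f(v):
    x, y = v
    return (x - 2) ** 2 + 10 * (y - x ** 2) ** 2

def grad_f(v):
    x, y = v
    return np.array([2 * (x - 2) - 40 * x * (y - x ** 2),
                     20 * (y - x ** 2)])

# BFGS builds a positive definite approximation to the inverse Hessian
# from gradient differences, so no second derivatives are required.
result = minimize(f, x0=np.array([0.0, 0.0]), jac=grad_f, method="BFGS")
print(result.x)   # ~ [2, 4]
```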

3. Conjugate gradient method. The conjugate gradient method lies between the steepest descent method and Newton's method. It needs only first-derivative information, yet it overcomes both the slow convergence of steepest descent and Newton's method's need to store and invert the Hessian matrix. The conjugate gradient method is not only one of the most useful methods for solving large systems of linear equations, but also one of the most effective methods for solving large-scale nonlinear optimization problems.

Among the various optimization algorithms, the conjugate gradient method is a very important one. Its advantages are low storage requirements, stepwise convergence, high stability, and no need for any external parameters.
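
A minimal sketch of the linear conjugate gradient method for solving A x = b with a symmetric positive definite A (the small 2x2 system below is an illustrative assumption):

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=None):
    """Solve A x = b for a symmetric positive definite matrix A
    using the (linear) conjugate gradient method."""
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    r = b - A @ x        # residual = negative gradient of 0.5*x'Ax - b'x
    p = r.copy()         # first search direction: steepest descent
    max_iter = max_iter or n
    for _ in range(max_iter):
        rr = r @ r
        if np.sqrt(rr) < tol:
            break
        Ap = A @ p
        alpha = rr / (p @ Ap)     # exact line search along p
        x = x + alpha * p
        r = r - alpha * Ap
        beta = (r @ r) / rr       # make the new direction A-conjugate to the old ones
        p = r + beta * p
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))   # ~ [0.0909, 0.6364]
```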

4. Heuristic optimization methods. A heuristic is a rule of thumb that people discover from experience when solving problems. Its characteristic is to use past experience to choose an approach that has already proven effective, rather than to search for an answer systematically, step by step.

There are many heuristic optimization methods, including the classical simulated annealing algorithm, genetic algorithms, ant colony algorithms, and particle swarm optimization.
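
As one example from this family, here is a minimal simulated annealing sketch (the 1-D test function, step size, and cooling schedule are illustrative assumptions):

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, T0=1.0, cooling=0.995, n_iter=5000):
    """Minimize f by randomly perturbing the current point and
    occasionally accepting worse points with probability exp(-delta / T)."""
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    T = T0
    for _ in range(n_iter):
        cand = x + random.uniform(-step, step)   # random neighbour
        fc = f(cand)
        delta = fc - fx
        if delta < 0 or random.random() < math.exp(-delta / T):
            x, fx = cand, fc                     # accept the move
            if fx < best_f:
                best_x, best_f = x, fx
        T *= cooling                             # gradually lower the temperature
    return best_x, best_f

# Example: a wavy 1-D function with many local minima
f = lambda x: x * x + 10 * math.sin(x)
print(simulated_annealing(f, x0=5.0))
```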

5. Lagrange multiplier method. As an optimization algorithm, the Lagrange multiplier method is mainly used to solve constrained optimization problems. Its basic idea is to transform a constrained optimization problem with n variables and k constraints into an unconstrained optimization problem with (n + k) variables by introducing Lagrange multipliers. The mathematical meaning of the Lagrange multipliers is that they are the coefficients with which the gradients of the constraint equations enter the linear combination expressing the gradient of the objective.

From the mathematical point of view, the Lagrange multiplier method establishes the extremum conditions by introducing Lagrange multipliers: taking the partial derivatives with respect to the n variables gives n equations, and adding the k constraints (corresponding to the k Lagrange multipliers) yields a system of (n + k) equations in (n + k) unknowns.
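
As a small worked example (the objective x*y and the constraint x + y = 10 are illustrative assumptions), the (n + k) equations can be set up and solved symbolically with SymPy:

```python
import sympy as sp

# Maximize f(x, y) = x * y subject to g(x, y) = x + y - 10 = 0.
# n = 2 variables and k = 1 constraint give (n + k) = 3 equations in (x, y, lam).
x, y, lam = sp.symbols('x y lam', real=True)
f = x * y
g = x + y - 10

L = f - lam * g              # the Lagrangian
equations = [sp.diff(L, x),  # dL/dx = 0
             sp.diff(L, y),  # dL/dy = 0
             g]              # the constraint itself
print(sp.solve(equations, [x, y, lam]))   # x = y = 5, lam = 5
```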