Convex optimization problem
Convex sets and convex functions pave the way for solving convex optimization problems. Before diving in, though, we should first get a general picture of how optimization problems are framed.

A standard optimization problem consists of an optimization variable x, an objective function f0, inequality constraints, and equality constraints:

    minimize   f0(x)
    subject to fi(x) ≤ 0, i = 1, …, m,        (1)
               hi(x) = 0, i = 1, …, p.

The points that satisfy all equality and inequality constraints form the feasible set of the problem. The feasible set must also be contained in the domains of all the functions involved.

We define the infimum of f0 over the feasible set as the optimal value p* of the problem. If the feasible set is empty, we take p* = +∞.

If the optimal value is attained at some feasible point x*, that point is called an optimal point of the problem. Note that not every problem attains its optimal value. All optimal points together form the optimal set.

If the optimal set of the problem is nonempty, we say the problem is solvable. It is easy to see that not every optimization problem with a nonempty feasible set is solvable: minimizing 1/x over x > 0, for example, has optimal value 0 but no optimal point.

For an inequality constraint fi(x) ≤ 0, if fi(x*) = 0 at a feasible point x*, we say the constraint is active at x*; if fi(x*) < 0, it is inactive. Which inequality constraints are active has been widely studied in optimization theory.

Problem (1) above is the standard form of an optimization problem. Many practical problems are not posed in standard form, but we can always transform them into it.

For example, a maximization problem becomes a minimization problem once we negate the objective. Since convex functions have global minima, minimization is the conventional choice.

Through suitable transformations, we can turn an optimization problem into another one equivalent to it. Such operations can sometimes simplify the solution of the problem.

Loosely speaking, a convex optimization problem is one whose objective function is convex and whose feasible set is convex. Compared with a general optimization problem, the standard form of a convex optimization problem requires the objective and the inequality-constraint functions to be convex and the equality constraints to be affine.

These requirements ensure that the feasible set of the problem is convex.

If the objective function is not convex but quasiconvex, the problem is a quasiconvex optimization problem.

When the objective and the inequality-constraint functions are concave and we maximize the objective, the problem is called a concave maximization problem. A concave maximization problem and a convex optimization problem are essentially the same thing: negating the functions converts one into the other.

Compared with general optimization problems, convex optimization has an excellent property: every local optimum is a global optimum.

If the objective function is differentiable, there is also a first-order criterion for optimality:

x is optimal if and only if x is feasible and ∇f0(x)ᵀ(y − x) ≥ 0 for every feasible y.

This proposition has a nice geometric interpretation: −∇f0(x) and y − x form an angle of at least 90° for every feasible y.

In other words, when ∇f0(x) ≠ 0, it defines a supporting hyperplane of the feasible set at x.

If the problem contains only equality constraints Ax = b, then x is optimal if and only if Ax = b and ∇f0(x) + Aᵀν = 0 for some ν, i.e. the gradient is orthogonal to the null space of A. This can be proved with the KKT conditions introduced later.
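
As a small numerical sketch of this condition (the problem instance here is my own illustration, not from the text): minimize f0(x) = 0.5·‖x‖² subject to x1 + x2 + x3 = 1. Projected gradient descent stays feasible, and at the optimum the gradient ∇f0(x) = x must be a multiple of Aᵀ = (1, 1, 1), i.e. constant across coordinates.

```python
# Projected gradient descent for: minimize 0.5*||x||^2  s.t.  sum(x) = 1.
# Optimality requires grad f0(x) = x to lie in the row space of
# A = [1, 1, 1], i.e. all coordinates of the gradient must be equal.

def project(x):
    """Orthogonal projection onto the hyperplane sum(x) = 1."""
    shift = (sum(x) - 1.0) / len(x)
    return [xi - shift for xi in x]

def solve(steps=500, lr=0.1):
    x = project([1.0, 0.0, 0.0])          # feasible starting point
    for _ in range(steps):
        grad = x                           # gradient of 0.5*||x||^2 is x
        x = project([xi - lr * gi for xi, gi in zip(x, grad)])
    return x

x = solve()
spread = max(x) - min(x)   # ~0: gradient is (nu, nu, nu) at the optimum
```

The iterates converge to x = (1/3, 1/3, 1/3), where the gradient equals Aᵀν with ν = 1/3.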

If the only constraint is nonnegativity of the variables, x ⪰ 0, then x is optimal if and only if x ⪰ 0, ∇f0(x) ⪰ 0, and the complementarity condition xi·(∇f0(x))i = 0 holds for every component i.

If the convex optimization problem has no constraints at all, the proposition above reduces to the well-known condition ∇f0(x) = 0, which for a convex objective is both necessary and sufficient.
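
A minimal sketch of the unconstrained case (the function here is my own example): gradient descent on the convex function f(x) = (x − 3)² + 1 drives the gradient to zero, and the point where ∇f = 0 is the global minimizer.

```python
# Unconstrained convex minimization: the optimality condition is
# grad f0(x) = 0.  Gradient descent on f(x) = (x - 3)^2 + 1
# converges to x = 3, where the gradient vanishes.

def grad(x):
    return 2.0 * (x - 3.0)

x = 0.0
for _ in range(200):
    x -= 0.1 * grad(x)       # each step shrinks (x - 3) by a factor 0.8
```

After 200 steps x is numerically indistinguishable from 3 and grad(x) from 0.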

Many practical problems can be reduced or transformed into one of several classical convex optimization problems: linear programming (LP), quadratic programming (QP), quadratically constrained quadratic programming (QCQP), second-order cone programming (SOCP), and geometric programming (GP). We introduce them in turn.

Linear programming is probably the simplest and most familiar convex optimization problem. An LP has the typical form

    minimize cᵀx subject to Ax ⪯ b, Gx = h.

Through transformations such as adding slack variables and splitting variables into positive and negative parts, any LP can be converted into standard form. For the standard LP, the simplex method is usually introduced in undergraduate operations-research courses. It is built on the geometry of the LP feasible region: if the optimal value of an LP is attained, it is attained at an extreme point (vertex) of the feasible region.

Several kinds of problems can be transformed into LPs.

Given a polyhedron, we want to find the radius and center of the largest ball inscribed in it. The center of this ball is called the Chebyshev center of the polyhedron.

Suppose the ball is B = {xc + u : ‖u‖₂ ≤ r}. The ball is contained in the halfspace aᵢᵀx ≤ bᵢ if and only if aᵢᵀxc + r‖aᵢ‖₂ ≤ bᵢ. We therefore obtain the corresponding LP: maximize r subject to aᵢᵀxc + r‖aᵢ‖₂ ≤ bᵢ for all i, with xc and r as the LP variables.
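
A quick numerical sketch (the grid search below is a stand-in for an actual LP solver, and the square is my own example): for a fixed center xc, the largest inscribed radius is r(xc) = minᵢ (bᵢ − aᵢᵀxc)/‖aᵢ‖₂, and the Chebyshev center maximizes this over xc.

```python
import math

# Chebyshev center of the square 0 <= x, y <= 2 by grid search over
# candidate centers; r(xc) = min_i (b_i - a_i^T xc) / ||a_i||_2.

A = [(1.0, 0.0), (-1.0, 0.0), (0.0, 1.0), (0.0, -1.0)]
b = [2.0, 0.0, 2.0, 0.0]

def inscribed_radius(xc):
    return min((bi - ai[0] * xc[0] - ai[1] * xc[1]) / math.hypot(*ai)
               for ai, bi in zip(A, b))

best_r, center = max(
    (inscribed_radius((i * 0.05, j * 0.05)), (i * 0.05, j * 0.05))
    for i in range(41) for j in range(41)
)
# best_r -> 1.0, center -> (1.0, 1.0), the center of the square
```

For this symmetric polyhedron the answer is obvious, but the same r(xc) formula is exactly the LP constraint above.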

If the objective of a linear program is not a linear function but a linear-fractional function (cᵀx + d)/(eᵀx + f), the problem becomes a linear-fractional program. It, too, can be transformed into a linear program.

First make the substitution y = x/(eᵀx + f), z = 1/(eᵀx + f). Substituting into the objective and the constraints turns the problem smoothly into a linear program in the variables (y, z).

When the objective in an LP becomes quadratic, the problem is a quadratic program. Note that the constraints still need to be linear.

If the functions in the inequality constraints also become quadratic, the problem is a quadratically constrained quadratic program.

They share a standard form:

    minimize   (1/2) xᵀP₀x + q₀ᵀx + r₀
    subject to (1/2) xᵀPᵢx + qᵢᵀx + rᵢ ≤ 0, i = 1, …, m,
               Ax = b,

where a QP has Pᵢ = 0 for all i ≥ 1.

It should be noted that for such a problem to be convex, the matrices Pᵢ must all be at least positive semidefinite.
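
This requirement is easy to test concretely. A small sketch (the matrices are my own examples): for a symmetric 2×2 matrix, positive semidefiniteness is equivalent to its smallest eigenvalue being nonnegative.

```python
import math

# Convexity of 0.5*x^T P x + q^T x hinges on P being positive
# semidefinite.  For a symmetric 2x2 matrix [[a, b], [b, c]] the
# eigenvalues are (tr +/- sqrt(tr^2 - 4*det)) / 2.

def is_psd_2x2(a, b, c):
    """PSD test for the symmetric matrix [[a, b], [b, c]]."""
    tr, det = a + c, a * c - b * b
    disc = math.sqrt(max(tr * tr - 4.0 * det, 0.0))
    min_eig = (tr - disc) / 2.0
    return min_eig >= -1e-12

# [[2, 1], [1, 2]]: eigenvalues 1 and 3 -> PSD, convex QP objective.
# [[1, 2], [2, 1]]: eigenvalue -1       -> not PSD, non-convex problem.
```

With an indefinite P the quadratic has directions of negative curvature, so "local optimum = global optimum" fails.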

Several problems can be cast as QPs:

Least squares, minimize ‖Ax − b‖₂², is an unconstrained QP: expanding gives xᵀAᵀAx − 2bᵀAx + bᵀb, and the matrix AᵀA is automatically positive semidefinite.
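
Setting the gradient of this unconstrained QP to zero gives the normal equations AᵀA x = Aᵀb. A minimal pure-Python sketch for a two-parameter fit (the data points are my own example):

```python
# Least squares via the normal equations A^T A x = A^T b,
# for an m x 2 design matrix A (pure Python, 2x2 Cramer solve).

def lstsq_2(A, b):
    # Build A^T A (2x2) and A^T b (2-vector).
    g = [[sum(row[i] * row[j] for row in A) for j in range(2)]
         for i in range(2)]
    h = [sum(row[i] * bi for row, bi in zip(A, b)) for i in range(2)]
    det = g[0][0] * g[1][1] - g[0][1] * g[1][0]
    return ((h[0] * g[1][1] - h[1] * g[0][1]) / det,
            (g[0][0] * h[1] - g[1][0] * h[0]) / det)

# Fit y ~ c0 + c1*t through (0,1), (1,3), (2,5): the exact line 1 + 2t.
coeffs = lstsq_2([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]], [1.0, 3.0, 5.0])
```

Since the three points are collinear, the residual is zero and the solution is exactly (1, 2).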

Finding the minimum distance between two polyhedra P₁ and P₂, minimize ‖x − y‖₂² subject to x ∈ P₁ and y ∈ P₂, is also a QP.
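
For simple polyhedra this QP can be sketched with alternating projections (the boxes below are my own example; for convex sets this scheme converges to a closest pair, though a general-purpose QP solver is the standard tool):

```python
# Minimum distance between two boxes via alternating projections.
# Projection onto a box is coordinatewise clamping.

def clamp(p, lo, hi):
    return tuple(min(max(pi, l), h) for pi, l, h in zip(p, lo, hi))

def box_distance(lo1, hi1, lo2, hi2, iters=100):
    x = lo1
    for _ in range(iters):
        y = clamp(x, lo2, hi2)   # project current point onto box 2
        x = clamp(y, lo1, hi1)   # project back onto box 1
    dist = sum((xi - yi) ** 2 for xi, yi in zip(x, y)) ** 0.5
    return x, y, dist

# unit square vs. the square [2,3] x [0,1]: closest pair is (1,0)-(2,0)
# at distance 1 (starting from the corner (0,0))
x, y, d = box_distance((0.0, 0.0), (1.0, 1.0), (2.0, 0.0), (3.0, 1.0))
```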

Another classic QP problem is omitted here.

Second-order cone programming has the typical form

    minimize fᵀx subject to ‖Aᵢx + bᵢ‖₂ ≤ cᵢᵀx + dᵢ, i = 1, …, m, Fx = g.

At first glance, squaring both sides of each inequality constraint would turn this into a QCQP; in fact, SOCP can be regarded as a generalization of QCQP.

Robust linear programming over ellipsoidal uncertainty sets, and linear chance constraints under Gaussian distributions, both end up being transformed into SOCPs.

Geometric programming is a non-convex problem that can be transformed into a convex optimization problem. Before introducing GP, we must clarify some concepts. A function of the form c·x₁^{a₁}·x₂^{a₂}⋯xₙ^{aₙ} with c > 0 is called a monomial, and a sum of monomials is called a posynomial.

A standard GP problem has the following form:

    minimize   f0(x)
    subject to fi(x) ≤ 1, i = 1, …, m,
               hi(x) = 1, i = 1, …, p,

where f0, …, fm are posynomials, h1, …, hp are monomials, and all variables are positive. Obviously a posynomial is not necessarily convex, so this is not yet a convex optimization problem. With the change of variables yᵢ = log xᵢ and taking logarithms of the objective and constraints, the problem becomes convex: each posynomial turns into a log-sum-exp of affine functions of y, and each monomial equality turns into an affine equality.

If the objective and the inequality constraints of a GP are all monomials, the same substitution converts it directly into an LP, so LP can also be regarded as a special case of GP.
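
The log transform can be checked numerically. A small sketch (the posynomial is my own example): under yᵢ = log xᵢ, the posynomial x₁x₂ + 0.5/(x₁x₂²) becomes F(y) = log(e^{y₁+y₂} + e^{−y₁−2y₂ + log 0.5}), a log-sum-exp of affine functions, and midpoint convexity holds at every pair of test points.

```python
import math

# Posynomial sum c_k * prod_i x_i^{a_ki} after y = log x becomes
# F(y) = log sum_k exp(a_k . y + log c_k): convex (log-sum-exp of
# affine maps).  We verify midpoint convexity at a few points.

A = [(1.0, 1.0), (-1.0, -2.0)]          # exponent rows a_k
logc = [math.log(1.0), math.log(0.5)]   # log-coefficients

def F(y):
    return math.log(sum(math.exp(a[0] * y[0] + a[1] * y[1] + lc)
                        for a, lc in zip(A, logc)))

def midpoint_convex(u, v):
    mid = ((u[0] + v[0]) / 2, (u[1] + v[1]) / 2)
    return F(mid) <= (F(u) + F(v)) / 2 + 1e-12

checks = all(midpoint_convex(u, v)
             for u in [(-1.0, 0.5), (2.0, -1.0), (0.0, 0.0)]
             for v in [(1.0, 1.0), (-2.0, 3.0)])
```

The original posynomial itself fails such a convexity check in x, which is precisely why GP needs the transformation.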

At the end of the previous chapter, we extended convex functions to vector-valued functions via generalized inequalities. The problem

    minimize f0(x) subject to fi(x) ⪯_{Ki} 0, i = 1, …, m, Ax = b

is called the standard form of a convex optimization problem with generalized inequality constraints. Just as in ordinary convex problems, f0 is required to be convex, and each fi is required to be Ki-convex with respect to the proper cone Ki.

The beauty of mathematics is that subtle theories can abstractly unify many seemingly unrelated problems. As we will see, the concept of the proper cone is a stroke of genius that ties the whole of optimization theory together.

Optimization problems in conic form are very common. In mathematics, general conclusions are considered better than specific ones, and cone programming is a representation that abstracts the forms of many classical optimization problems.

Cone programming has the typical form

    minimize cᵀx subject to Fx + g ⪯_K 0, Ax = b.

Linear programming is obviously the special case in which K is the nonnegative orthant.

SOCP can be expressed in conic form by taking K to be the second-order cone {(u, t) : ‖u‖₂ ≤ t}.

Semidefinite programming (SDP) is an extremely important convex optimization problem. Starting from the conic problem, let K be the cone of positive semidefinite matrices; since x ↦ x₁F₁ + ⋯ + xₙFₙ is a linear map, we obtain the standard form of an SDP:

    minimize cᵀx subject to x₁F₁ + ⋯ + xₙFₙ + G ⪯ 0, Ax = b,

where the Fᵢ and G are symmetric matrices. A linear inequality between matrices such as this one is called a linear matrix inequality, abbreviated LMI in much of the literature.
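
A tiny sketch of what an LMI constraint means (the diagonal matrices are my own deliberately simple example): x·F + G ⪯ 0 asks that the matrix x·F + G have no positive eigenvalue, and for diagonal F, G this is an elementwise check, so the feasible set in x can be read off directly.

```python
# One-variable LMI: x*F + G <= 0 in the semidefinite order.
# With F = diag(1, -1) and G = diag(-2, 0), the matrix is
# diag(x - 2, -x), which is negative semidefinite iff 0 <= x <= 2.

F = [1.0, -1.0]          # diagonal of F
G = [-2.0, 0.0]          # diagonal of G

def feasible(x):
    """Is x*F + G negative semidefinite (all diagonal entries <= 0)?"""
    return all(x * f + g <= 1e-12 for f, g in zip(F, G))

# Scan a grid: the feasible set is the interval [0, 2].
feasible_set = [k * 0.5 for k in range(-4, 9) if feasible(k * 0.5)]
```

Note the feasible set is an interval, i.e. convex, as the general theory guarantees for any LMI.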

Generalized inequalities can act not only on the constraints but also on the objective. Let f0 be a multivariate vector-valued function; over a convex feasible set, we look for its minimum (or minimal) points with respect to a generalized inequality ⪯_K.

Such a problem is called a vector optimization problem. The point of this formulation is that when the objective is multidimensional, an appropriate cone defines what it means for one objective value to be "at least no worse than" another; the minimal points under this ordering are exactly the Pareto-optimal points.