How to analyze a nonlinear control system with the Lyapunov method
Perhaps the following will help you:

Lyapunov stability analysis

4.1 Overview

There are many methods for analyzing the stability of linear time-invariant systems. For nonlinear systems and linear time-varying systems, however, these methods may be difficult or even impossible to apply. Lyapunov stability analysis is a general method for solving the stability problem of nonlinear systems.

More than one hundred years ago, in 1892, the great Russian mathematician and mechanician A. M. Lyapunov (1857-1918), with genius and meticulous research, published his doctoral thesis "The General Problem of the Stability of Motion".

In this historic work, Lyapunov studied the equilibrium state and its stability, motion and its stability, and the stability of the perturbation equation, and showed that the stability of a given motion of the system (including an equilibrium state) is equivalent to the stability of the origin (the zero solution) of the perturbation equation of that motion.

On this basis, Lyapunov put forward two methods for solving the stability problem, namely the Lyapunov first method and the Lyapunov second method.

In the first method, motion stability is analyzed by way of the differential equations themselves: the stability of the original nonlinear system is judged from the eigenvalue distribution of the linearized equation of the nonlinear system.

The second method is a qualitative one. Instead of solving the difficult nonlinear differential equation, one constructs a Lyapunov function and studies its positive definiteness together with the negative definiteness or semidefiniteness of its derivative with respect to time along solutions of the system equation, and thereby draws conclusions about stability. This method is widely used and has had far-reaching influence; usually, what we call the Lyapunov method is Lyapunov's second method.

Although Lyapunov stability theory plays an important role in the stability analysis of nonlinear systems, it cannot directly determine the stability of many nonlinear systems. Skills and experience are very important in solving nonlinear problems. In this chapter, the stability analysis of practical nonlinear systems is limited to several simple cases.

Section 4.1 of this chapter is an overview. Section 4.2 introduces the definition of stability in the sense of Lyapunov. Section 4.3 gives the Lyapunov stability theorems and applies them to the stability analysis of nonlinear systems. Section 4.4 discusses the Lyapunov stability analysis of linear time-invariant systems.

4.2 Stability in the sense of Lyapunov

For a given control system, stability analysis is usually the most important task. If the system is linear, there are many stability criteria, such as the Routh-Hurwitz stability criterion and the Nyquist stability criterion. However, if the system is nonlinear or linear time-varying, these criteria no longer apply.

The Lyapunov second method (also called the Lyapunov direct method) introduced in this section is the most common method for determining the stability of nonlinear systems and linear time-varying systems. Of course, the method can also be applied to the stability analysis of linear time-invariant systems, as well as to many other problems such as linear quadratic optimal control.

4.2.1 Equilibrium state; reducing a given motion to the origin of the perturbation equation

Consider the following nonlinear system

ẋ = f(x, t)    (4.1)

where x is an n-dimensional state vector and f(x, t) is an n-dimensional vector function of x1, x2, …, xn and t. Assume that under the given initial condition x(t0) = x0, equation (4.1) has a unique solution x(t; x0, t0), with x(t0; x0, t0) = x0.

If in the system of equation (4.1) there always exists a state x_e such that

f(x_e, t) = 0,  for all t    (4.2)

then x_e is called an equilibrium state or equilibrium point of the system. If the system is linear time-invariant, ẋ = Ax, then when A is a nonsingular matrix the system has a unique equilibrium state, while when A is singular the system has infinitely many equilibrium states. A nonlinear system may have one or more equilibrium states, corresponding to the constant solutions of the system (x(t) = x_e for all t). Determining the equilibrium states does not involve solving the system differential equation (4.1), but only solving equation (4.2).
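As a quick numerical illustration of the linear case (a sketch only; the matrices below are chosen purely for illustration), the set of equilibrium states of ẋ = Ax is the null space of A, so its dimension distinguishes the nonsingular and singular cases:

```python
import numpy as np

# Equilibria of a linear system x_dot = A x are the solutions of A x = 0.
# Nonsingular A: only the origin. Singular A: a whole null space of equilibria.
def equilibrium_set_dimension(A):
    """Dimension of the equilibrium set {x : A x = 0} (null-space dimension)."""
    A = np.asarray(A, dtype=float)
    return A.shape[1] - np.linalg.matrix_rank(A)

A_nonsingular = np.array([[0.0, 1.0],
                          [-2.0, -3.0]])   # det = 2, nonsingular
A_singular = np.array([[1.0, 2.0],
                       [2.0, 4.0]])        # rank 1, singular

print(equilibrium_set_dimension(A_nonsingular))  # 0 -> unique equilibrium (the origin)
print(equilibrium_set_dimension(A_singular))     # 1 -> a whole line of equilibria
```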

Any isolated equilibrium state, or any given motion, can be shifted to the coordinate origin of the perturbation equation by a coordinate transformation, i.e. x̃ = x − x_e or x̃ = x − x(t). In this chapter, unless otherwise specified, we discuss only the stability of the perturbation equation about the equilibrium state at the origin. This so-called "origin stability problem" greatly simplifies the discussion without loss of generality, and it laid a solid foundation for the establishment of stability theory; it is one of the contributions of Lyapunov's theory.

4.2.2 Definition of stability in the sense of Lyapunov

First, the definition of stability in the sense of Lyapunov is given; then some necessary mathematical background is reviewed, so that the Lyapunov stability theorems can be presented in detail in the next section.

Definition 4.1. Consider the system

ẋ = f(x, t),  x(t0) = x0

The h-neighborhood of the equilibrium state x_e is the set of states satisfying

||x − x_e|| ≤ h

where ||x − x_e|| is the 2-norm, or Euclidean norm, of the vector, i.e.

||x − x_e|| = [(x1 − x1e)^2 + (x2 − x2e)^2 + … + (xn − xne)^2]^(1/2)

Similarly, the spherical regions S(δ) and S(ε) can be defined accordingly.

(1) If, within the h-neighborhood, for any given ε > 0 there corresponds a δ(ε, t0) > 0 such that every trajectory starting in S(δ) does not leave S(ε) as t approaches infinity, then the equilibrium state x_e of the system (4.1) is said to be stable in the sense of Lyapunov. In general, the real number δ depends on ε and usually also on t0. If δ does not depend on t0, the equilibrium state is said to be uniformly stable.

The above definition means: first choose a sphere S(ε); for each S(ε) there must exist a sphere S(δ) corresponding to it, such that trajectories starting from S(δ) never leave S(ε) as t approaches infinity.

(2) If the equilibrium state x_e is stable in the sense of Lyapunov and, moreover, every trajectory starting in S(δ) not only stays within S(ε) but also converges to x_e as t approaches infinity, then the equilibrium state x_e of the system (4.1) is said to be asymptotically stable; the spherical region S(δ) is called the domain of attraction of the equilibrium state.

Similarly, if δ does not depend on t0, the equilibrium state is uniformly asymptotically stable.

In fact, asymptotic stability is more important than stability in the sense of Lyapunov. Since the asymptotic stability of a nonlinear system is a local concept, merely establishing asymptotic stability does not mean that the system will work normally. It is usually necessary to determine the largest region of asymptotic stability, the domain of attraction: the part of the state space in which asymptotically stable trajectories originate. In other words, every trajectory starting in the domain of attraction is asymptotically stable.

(3) If the trajectories starting from all states (all points in the state space) are asymptotically stable, the equilibrium state is said to be asymptotically stable in the large. In other words, if the domain of attraction of the equilibrium state of the system (4.1) is the whole state space, the equilibrium state is said to be asymptotically stable in the large. Obviously, a necessary condition for asymptotic stability in the large is that there be only one equilibrium state in the whole state space.

In control engineering problems, it is always desirable for the system to be asymptotically stable in the large. If the equilibrium state is not asymptotically stable in the large, the problem becomes that of determining the largest region of asymptotic stability, the domain of attraction, which is usually very difficult. For all practical purposes, however, it is enough to determine a domain of attraction large enough that disturbances will not exceed it.

(4) If for some real number ε > 0 and any real number δ > 0, no matter how small, there is always a state x0 in S(δ) such that the trajectory starting from it eventually leaves S(ε), then the equilibrium state is said to be unstable.

Fig. 4.1(a), (b), and (c) show typical trajectories corresponding to a stable, an asymptotically stable, and an unstable equilibrium state, respectively. In Fig. 4.1(a), (b), and (c), the spherical region S(δ) bounds the initial state x0, and the spherical region S(ε) bounds the trajectory starting from x0.

Note that since the above definitions cannot specify the exact region of allowable initial conditions, they apply only to a neighborhood of the equilibrium state, unless S(δ) corresponds to the whole state plane.

In addition, in Fig. 4.1(c) the trajectory leaves S(ε), indicating that the equilibrium state is unstable. This does not mean, however, that the trajectory tends to infinity: it may instead tend to a limit cycle outside S(ε). (If a linear time-invariant system is unstable, trajectories starting near the unstable equilibrium state tend to infinity; in a nonlinear system this need not be the case.)

Fig. 4.1 (a) Stable equilibrium state and a typical trajectory; (b) asymptotically stable equilibrium state and a typical trajectory; (c) unstable equilibrium state and a typical trajectory

The above definition is the minimum requirement for understanding the stability analysis of linear and nonlinear systems introduced in this chapter. Note that these definitions are not the only way to determine the concept of equilibrium stability. In fact, there are other definitions in other documents.

For linear systems, asymptotic stability is equivalent to asymptotic stability in the large; for nonlinear systems, one generally considers only asymptotic stability within a finite domain of attraction.

Finally, it should be pointed out that the concept of stability learned in classical control theory differs from stability in the sense of Lyapunov. In classical control theory, only asymptotically stable systems are called stable; systems that are stable in the sense of Lyapunov but not asymptotically stable are called unstable. The following table shows the differences and connections between the two.

Table 4.1 Stability concepts for linear systems versus stability in the sense of Lyapunov

Classical control theory (linear systems)    In the sense of Lyapunov
Unstable (Re(s) > 0)                         Unstable
Critical case (Re(s) = 0)                    Stable
Stable (Re(s) < 0)                           Asymptotically stable

Preliminaries

1. Positive definiteness of scalar functions

A scalar function V(x) defined in a region Ω (containing the origin of the state space) is said to be positive definite in Ω if V(0) = 0 and V(x) > 0 for all nonzero states x in Ω.

If a time-varying function V(x, t) is bounded from below by a time-invariant positive definite function, i.e. there exists a positive definite function V0(x) such that

V(x, t) ≥ V0(x),  for all t ≥ t0

V(0, t) = 0,  for all t ≥ t0

then the time-varying function V(x, t) is said to be positive definite in the region Ω (containing the origin of the state space).

2. Negative definiteness of scalar functions

A scalar function V(x) is called a negative definite function if -V(x) is a positive definite function.

3. Positive semidefiniteness of scalar functions

A scalar function V(x) is called positive semidefinite if V(0) = 0, V(x) ≥ 0 at all states in Ω, and V(x) = 0 at some states other than the origin.

4. Negative semidefiniteness of scalar functions

A scalar function V(x) is called negative semidefinite if -V(x) is positive semidefinite.

5. Indefiniteness of scalar functions

A scalar function V(x) is called indefinite if, no matter how small the region Ω, it takes both positive and negative values in Ω.

-

[Example 4.1] This example classifies several scalar functions according to the definitions above, assuming x is a two-dimensional vector.

1. Positive definite

2. Positive semidefinite

3. Negative definite

4. Indefinite

5. Positive definite

-

6. Quadratic forms

In stability analysis based on Lyapunov's second method, one class of scalar functions plays a very important role: quadratic forms. For example,

V(x) = xᵀPx

Note that x here is a real vector and P is a real symmetric matrix.

7. Complex quadratic forms (Hermitian forms)

If x is an n-dimensional complex vector and P is a Hermitian matrix, the complex quadratic form

V(x) = x*Px

is called a Hermitian form. In stability analysis based on the state space, the Hermitian form is often used instead of the quadratic form, because it is more general (for a real vector x and a real symmetric matrix P, the Hermitian form reduces to the quadratic form).

The positive definiteness of a quadratic or Hermitian form can be judged by Sylvester's criterion: the necessary and sufficient condition for the form to be positive definite is that all the leading principal minors of the matrix P are positive, i.e.

p11 > 0,  det [p11 p12; p21 p22] > 0,  …,  det P > 0

Note that pij* denotes the complex conjugate of pij; for a quadratic form, pij* = pij.

V(x) = xᵀPx is positive semidefinite if P is singular and all its principal minors (not only the leading ones) are nonnegative.

V(x) is negative definite if -V(x) is positive definite. Similarly, V(x) is negative semidefinite if -V(x) is positive semidefinite.

-

[Example 4.2] Prove that the following quadratic form is positive definite.

[Solution] The quadratic form V(x) can be written as V(x) = xᵀPx.

Applying Sylvester's criterion, we obtain

Since all the leading principal minors of the matrix P are positive, V(x) is positive definite.

-
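Sylvester's criterion is easy to check numerically. The sketch below uses matrices chosen purely for illustration (not those of the example above, whose entries were lost in reproduction) and tests the leading principal minors with NumPy:

```python
import numpy as np

def is_positive_definite_sylvester(P):
    """Sylvester's criterion: a symmetric (or Hermitian) matrix P is positive
    definite iff every leading principal minor det(P[:k, :k]) is positive."""
    P = np.asarray(P, dtype=float)
    n = P.shape[0]
    return all(np.linalg.det(P[:k, :k]) > 0 for k in range(1, n + 1))

# Hypothetical symmetric matrices, for illustration only.
P = np.array([[2.0, 1.0],
              [1.0, 2.0]])
print(is_positive_definite_sylvester(P))   # True: minors are 2 and 3

Q = np.array([[1.0, 2.0],
              [2.0, 1.0]])
print(is_positive_definite_sylvester(Q))   # False: second minor is -3
```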

4.3 Lyapunov stability theory

In 1892, A.M. Lyapunov proposed two methods (called the first method and the second method) to determine the stability of dynamic systems described by ordinary differential equations.

The first method comprises all procedures in which the explicit solution of the differential equations is used in the analysis of the system; it is also called the indirect method.

The second method does not require solving the differential equations: Lyapunov's second method can determine the stability of a system without solving the state equations. Since solving the state equations of nonlinear systems and linear time-varying systems is usually very difficult, this method shows great advantages. The second method is also called the direct method.

Although it takes considerable experience and skill to analyze the stability of nonlinear systems with Lyapunov's second method, this method can solve the stability problem of nonlinear systems when other methods are ineffective.

Lyapunov's first method

The basic idea is to linearize the nonlinear system, then compute the eigenvalues of the linearized equation, and finally judge the stability of the original nonlinear system from those eigenvalues.

Let the state equation of the nonlinear system be

ẋ = f(x),  f(x_e) = 0

Expanding the nonlinear function f(x) in a Taylor series about the equilibrium state x_e gives

f(x) = f(x_e) + A(x − x_e) + R(x)

where f(x_e) = 0 is the constant term, A(x − x_e) collects the first-order terms, and R(x) is the sum of all higher-order terms. The linearized equation is therefore

Δẋ = A Δx,  Δx = x − x_e

where

A = ∂f/∂x evaluated at x = x_e

is the Jacobian matrix.

The linearized equation (with higher-order small quantities neglected) is a very important and widely used approximate analysis tool. This is because in engineering many systems are essentially nonlinear, and nonlinear systems are very difficult to solve, so they are often approximated by linear systems.

But is this approximation valid? We know that there are essential differences between linear and nonlinear systems, such as self-oscillation, jump phenomena, self-organization, and chaos, so in general the solutions and conclusions obtained for a linear system cannot be carried over to the original nonlinear system at will. Narrowing the scope to the stability problem alone, Lyapunov established the conditions under which the linearized system may replace the original nonlinear system, proving three theorems with clear conclusions. These theorems laid the theoretical foundation for linearization methods and therefore have important theoretical and practical significance.

Theorem 4.1 (Lyapunov). If all eigenvalues of the system matrix A of the linearized system have negative real parts, then the equilibrium state of the original nonlinear system is asymptotically stable, and the stability of the system does not depend on the higher-order terms.

Theorem 4.2 (Lyapunov). If at least one eigenvalue of the system matrix A of the linearized system has a positive real part, then the equilibrium state of the original nonlinear system is unstable, regardless of the higher-order terms.

Theorem 4.3 (Lyapunov). If the system matrix A of the linearized system has eigenvalues with zero real part, while all remaining eigenvalues have negative real parts, then in this critical case the stability of the equilibrium state of the original nonlinear system depends on the higher-order terms: it may be unstable or stable. In this case the stability of the original nonlinear system can no longer be characterized by the linearized equation.

The above three theorems are also called Lyapunov's theorems of the first approximation. They provide an important theoretical basis for linearization: for any nonlinear system whose linearized system is asymptotically stable or unstable with respect to the equilibrium state, the same conclusion holds for the original nonlinear system. In the critical case, however, the higher-order terms must be taken into account.
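A minimal numerical sketch of the first method, using a damped-pendulum model assumed here purely for illustration: form the Jacobian at the equilibrium and inspect the real parts of its eigenvalues (Theorems 4.1 and 4.2):

```python
import numpy as np

# Hypothetical system (damped pendulum with unit constants), x1 = theta, x2 = theta_dot:
#   x1_dot = x2
#   x2_dot = -sin(x1) - 0.5 * x2
# Jacobian of f at the equilibrium x = 0:
#   A = [[0, 1], [-cos(0), -0.5]] = [[0, 1], [-1, -0.5]]
A = np.array([[0.0, 1.0],
              [-1.0, -0.5]])

eigs = np.linalg.eigvals(A)
# All real parts negative -> by Theorem 4.1 the origin of the original
# nonlinear system is asymptotically stable.
print(np.all(eigs.real < 0))  # True
```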

Lyapunov's second method

According to the classical theory of mechanics, a vibrating system is stable if its total energy (a positive definite function) decreases continuously (meaning that the time derivative of the total energy is negative) until an equilibrium state is reached.

Lyapunov's second method rests on a more general fact: if the system has an asymptotically stable equilibrium state, then the energy stored in the system, as the state moves within the domain of attraction, decays with time until it attains its minimum at the steady state. For some purely mathematical systems, however, there is no simple way of defining an "energy function". To overcome this difficulty, Lyapunov introduced a fictitious energy function, the Lyapunov function. This function is undoubtedly more general than energy and more widely applicable: in fact, any scalar function satisfying the hypotheses of the Lyapunov stability theorems (see Theorems 4.4 and 4.5) can serve as a Lyapunov function (although constructing one may be very difficult).

A Lyapunov function depends on x and t, and we denote it by V(x, t), or by V(x) when it does not contain t explicitly. In Lyapunov's second method, the sign properties of V(x, t) and of its total derivative V̇(x, t) with respect to time along the system trajectories provide criteria for judging the stability, asymptotic stability, or instability of the equilibrium state. This direct method does not require solving the given nonlinear state equation.

1. On asymptotic stability

It can be shown that if x is an n-dimensional vector and the scalar function V(x) is positive definite, then the states x satisfying

V(x) = c

where c is a positive constant, lie on a closed hypersurface in the n-dimensional state space, at least in the neighborhood of the origin. If V(x) → ∞ as ||x|| → ∞, the closed surfaces extend over the whole state space as c → ∞. The hypersurface V(x) = c1 lies entirely inside the hypersurface V(x) = c2 if c1 < c2.

For a given system, if a positive definite scalar function V(x) can be found whose total derivative along the trajectories is always negative, then as time increases V(x) takes smaller and smaller values of c; as time increases further, V(x) eventually tends to zero, and hence x tends to zero as well. This means that the origin of the state space is asymptotically stable. Lyapunov's main stability theorem generalizes this fact and gives sufficient conditions for asymptotic stability.

Theorem 4.4 (Lyapunov, Persidskii, Barbashin, Krasovskii). Consider the nonlinear system

ẋ = f(x, t)

where

f(0, t) = 0,  for all t ≥ t0

If there exists a scalar function V(x, t) with continuous first-order partial derivatives satisfying the following conditions:

1. V(x, t) is positive definite;

2. V̇(x, t) is negative definite;

then the equilibrium state at the origin is (uniformly) asymptotically stable.

Furthermore, if V(x, t) → ∞ as ||x|| → ∞ (radial unboundedness), then the equilibrium state at the origin is uniformly asymptotically stable in the large.

-

[Example 4.3] Consider the following nonlinear system

Obviously, the origin (x1 = 0, x2 = 0) is the only equilibrium state. Determine its stability.

[Solution] Define a positive definite scalar function V(x).

Its time derivative V̇(x) along the trajectories is negative definite, which means that V(x) decreases continuously along any trajectory, so V(x) is a Lyapunov function. Since V(x) → ∞ as ||x|| → ∞, by Theorem 4.4 the equilibrium state of the system at the origin is asymptotically stable in the large.

Note that if V(x) is assigned a series of constant values 0, c1, c2, … (0 < c1 < c2 < …), then V(x) = 0 corresponds to the origin of the state plane, and V(x) = c1, V(x) = c2, … describe a family of mutually disjoint circles enclosing the origin of the state plane, as shown in Fig. 4.2. Note also that since V(x) is radially unbounded, i.e. V(x) → ∞ as ||x|| → ∞, this family of circles can extend over the whole state plane.

Because the circle V(x) = c1 lies completely inside the circle V(x) = c2 for c1 < c2, a typical trajectory crosses the V-circles from the outside toward the inside. Hence one geometric interpretation of the Lyapunov function is as a measure of the distance between the state x and the origin of the state space: if the distance between the origin and the instantaneous state x(t) decreases as t increases, i.e. V̇(x) < 0, the trajectory approaches the origin.

Figure 4.2 Constant V Circle and Typical Trajectory

-
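Since the equations of Example 4.3 were lost in reproduction, the sketch below substitutes a standard system of the same kind, assumed here purely for illustration, and checks numerically that V decreases monotonically along a trajectory, as Theorem 4.4 requires:

```python
import numpy as np

# Hypothetical system (a common textbook choice, not necessarily Example 4.3):
#   x1_dot = x2 - x1*(x1**2 + x2**2)
#   x2_dot = -x1 - x2*(x1**2 + x2**2)
# With V(x) = x1**2 + x2**2, one gets V_dot = -2*(x1**2 + x2**2)**2 < 0 for x != 0,
# and V is radially unbounded, so the origin is asymptotically stable in the large.
def f(x):
    r2 = x[0]**2 + x[1]**2
    return np.array([x[1] - x[0]*r2, -x[0] - x[1]*r2])

def V(x):
    return x[0]**2 + x[1]**2

# Forward-Euler simulation: V should decrease monotonically along the trajectory.
x = np.array([1.0, -0.5])
values = [V(x)]
for _ in range(2000):
    x = x + 1e-3 * f(x)
    values.append(V(x))

print(all(b < a for a, b in zip(values, values[1:])))  # True: V decays toward 0
```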

Theorem 4.4 is the basic theorem of Lyapunov's second method. Here are some explanations of this important theorem.

(1) The theorem gives only sufficient conditions: if we can construct a Lyapunov function, the system is asymptotically stable. But if we fail to find such a Lyapunov function, no conclusion can be drawn; for example, we cannot conclude that the system is unstable.

(2) For the asymptotically stable equilibrium state, Lyapunov function must exist.

(3) For a nonlinear system, constructing a particular Lyapunov function may prove asymptotic stability within some region, but this does not mean that motions outside that region are unstable. For a linear system, if an asymptotically stable equilibrium state exists, it is necessarily asymptotically stable in the large.

(4) The stability theorems given here apply not only to linear and nonlinear systems, but also to time-varying and time-invariant systems; they are of extremely general significance.

Obviously, Theorem 4.4 still has restrictions: for example, V̇(x, t) must be negative definite. If the additional condition is imposed that V̇ does not vanish identically along any trajectory other than the origin, then the requirement that V̇ be negative definite can be relaxed to negative semidefinite.

Theorem 4.5 (Krasovskii, Barbashin). Consider the nonlinear system

ẋ = f(x, t)

where

f(0, t) = 0,  for all t ≥ t0

If there exists a scalar function V(x, t) with continuous first-order partial derivatives satisfying the following conditions:

1. V(x, t) is positive definite;

2. V̇(x, t) is negative semidefinite;

3. V̇(φ(t; x0, t0), t) does not vanish identically in t ≥ t0 for any t0 and any x0 ≠ 0, where φ(t; x0, t0) denotes the trajectory or solution starting from x0 at time t0;

4. V(x, t) → ∞ as ||x|| → ∞;

then the equilibrium state at the origin of the system is asymptotically stable in the large.

Note that if V̇ is not negative definite but only negative semidefinite, the trajectory of a representative point may become tangent to some surface V(x) = c. However, since V̇(φ(t; x0, t0), t) does not vanish identically in t for any t0 and any x0 ≠ 0, the representative point cannot remain at the tangent point (where V̇ = 0) and must move on toward the origin.

[Example 4.4] Given the following continuous-time time-invariant system.

Determine its stability.

[Solution] The equilibrium state of the system is the origin, x_e = 0.

Choose a positive definite scalar function V(x). Then:

(i) V(x) is positive definite;

(ii) V̇(x) ≤ 0, and V̇(x) = 0 only in the following cases:

(a) x2 = 0, x1 arbitrary;

(b) x1 = 0, x2 arbitrary.

For all other states V̇(x) < 0; hence V̇(x) is negative semidefinite.

(iii) Check whether V̇ vanishes identically along a nonzero solution.

Case (a): if this were a perturbed solution of the system, substituting it into the system equations would force the remaining state variable to be zero as well.

This shows that, except at the origin, case (a) is not a perturbed solution of the system.

Case (b): substituting it into the system equations likewise leads to a contradiction,

so case (b) is not a perturbed solution of the system either.

(iv) Obviously, V(x) → ∞ as ||x|| → ∞.

In summary, by Theorem 4.5 the equilibrium state of the system at the origin is asymptotically stable in the large.
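The equations of Example 4.4 were also lost in reproduction, so the sketch below uses a hypothetical system that exhibits the same mechanism: V̇ = -2·x2² vanishes on the whole x1-axis, yet no nonzero solution stays on that axis, so Theorem 4.5 still yields asymptotic stability in the large:

```python
import numpy as np

# Hypothetical system (illustration only, not necessarily Example 4.4):
#   x1_dot = x2
#   x2_dot = -x1 - x2
# With V(x) = x1**2 + x2**2:
#   V_dot = 2*x1*x2 + 2*x2*(-x1 - x2) = -2*x2**2   (negative SEMIdefinite;
#   zero whenever x2 = 0). Substituting x2 = 0 into the equations forces
#   x1 = 0, so no nonzero trajectory stays where V_dot = 0.
def f(x):
    return np.array([x[1], -x[0] - x[1]])

x = np.array([2.0, 0.0])        # starts exactly on the set where V_dot = 0
for _ in range(20000):          # simulate up to t = 20 by forward Euler
    x = x + 1e-3 * f(x)

print(np.linalg.norm(x) < 1e-3)  # True: the state still converges to the origin
```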

2. On stability

If, however, there exists a positive definite scalar function V(x, t) whose derivative V̇(x, t) is identically zero along some trajectory, the system can remain on a limit cycle. In this case the equilibrium state at the origin is said to be stable in the sense of Lyapunov.

3. On instability

If the equilibrium state of a system is unstable, there exists a scalar function that can be used to establish the instability of the equilibrium state. The instability theorem is given below.

Theorem 4.6 (Lyapunov). Consider the nonlinear system

ẋ = f(x, t)

where

f(0, t) = 0,  for all t ≥ t0

If there exists a scalar function V(x, t) with continuous first-order partial derivatives satisfying the following conditions:

1. V(x, t) is positive definite in some neighborhood of the origin;

2. V̇(x, t) is positive definite in the same neighborhood;

then the equilibrium state at the origin is unstable.
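A numerical sketch of Theorem 4.6 with a hypothetical system: for ẋ1 = x1, ẋ2 = x2, the function V(x) = x1² + x2² is positive definite and so is V̇(x) = 2(x1² + x2²), so trajectories starting arbitrarily close to the origin must escape any region S(ε):

```python
import numpy as np

# Hypothetical unstable system (illustration only):
#   x1_dot = x1,  x2_dot = x2
# V(x) = x1**2 + x2**2 is positive definite, and
# V_dot = 2*x1**2 + 2*x2**2 is positive definite in the same neighborhood,
# so by the instability theorem the origin is unstable.
x = np.array([1e-6, 1e-6])      # start very close to the origin
for _ in range(20000):          # simulate up to t = 20 by forward Euler
    x = x + 1e-3 * x            # Euler step of x_dot = x

print(np.linalg.norm(x) > 1.0)  # True: the trajectory has left S(epsilon = 1)
```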

4.3.3 Comparison of the stability of linear and nonlinear systems

In a linear time-invariant system, if the equilibrium state is locally asymptotically stable, then it is asymptotically stable in the large. In a nonlinear system, however, an equilibrium state may be locally asymptotically stable without being asymptotically stable in the large. Therefore, the implications of asymptotic stability of an equilibrium state are completely different for linear time-invariant systems and for nonlinear systems.

To test the asymptotic stability of an actual nonlinear system, it is far from enough to analyze the stability of its linear model, i.e. to apply Lyapunov's first method; the nonlinear system must be studied without linearization. Several methods based on Lyapunov's second method serve this purpose, such as Krasovskii's method, the Schultz-Gibson variable gradient method, Lur'e's method, and Popov's method. In the following we discuss only Krasovskii's method for the stability analysis of nonlinear systems.