
14.1: Quadratic Lyapunov Functions for LTI Systems


    Consider the continuous-time system

    \[\dot{x}(t)=A x(t). \ \tag{14.1}\]

    We have already established that the system (14.1) is asymptotically stable if and only if all the eigenvalues of \(A\) are in the open left half plane. In this section we will show that this result can be inferred from Lyapunov theory. Moreover, it will be shown that quadratic Lyapunov functions suffice. A consequence of this is that stability can be assessed by methods that may be computationally simpler than eigenanalysis. More importantly, quadratic Lyapunov functions and the associated mathematics turn up in a variety of other problems, so they are worth mastering in the context of stability evaluation.

    Quadratic Positive-Definite Functions

    Consider the function

    \[V(x)=x^{T} P x, \quad x \in \mathbb{R}^{n}\nonumber\]

    where \(P\) is a symmetric matrix. This is the general form of a quadratic function in \(\mathbb{R}^{n}\). It is sufficient to consider symmetric matrices; if \(P\) is not symmetric, we can define \(P_{1}=\frac{1}{2}\left(P+P^{T}\right)\). It follows immediately that \(x^{T} P x=x^{T} P_{1} x\) (verify, using the fact that \(x^{T} P x\) is a scalar).

    Proposition 14.1

    \(V(x)\) is a positive definite function if and only if all the eigenvalues of \(P\) are positive.

    Proof

    Since \(P\) is symmetric, it can be diagonalized by an orthogonal matrix, i.e.,

    \[P=U^{T} D U \quad \text{with } U^{T} U=I \text{ and } D \text{ diagonal.}\nonumber\]

    Then, with \(y=U x\),

    \[V(x)=x^{T} U^{T} D U x=y^{T} D y=\sum_{i} \lambda_{i}\left|y_{i}\right|^{2}\nonumber\]

    Thus,

    \[V(x)>0 \quad \forall x \neq 0 \Leftrightarrow \lambda_{i}>0, \forall i\nonumber\]

    Definition 14.1

    A matrix \(P\) that satisfies

    \[x^{T} P x>0 \quad \forall x \neq 0 \ \tag{14.2}\]

    is called positive definite. When \(P\) is symmetric (which is usually the case of interest, for the reason mentioned above), we will denote its positive definiteness by \(P > 0\). If \(x^{T} P x \geq 0\) for all \(x\), then \(P\) is positive semi-definite, which we denote in the symmetric case by \(P \geq 0\).

    For a symmetric positive definite matrix, it follows that

    \[\lambda_{\min }(P)\|x\|^{2} \leq V(x) \leq \lambda_{\max }(P)\|x\|^{2}\nonumber\]

    This inequality follows directly from the proof of Proposition 14.1.

    It is also evident from the above discussion that the singular values and eigenvalues of a symmetric positive definite matrix coincide.

    Exercise

    Show that \(P > 0\) if and only if \(P = G^{T} G\) where \(G\) is nonsingular. The matrix \(G\) is called a square root of \(P\) and is denoted by \(P^{\frac{1}{2}}\). Show that \(H\) is another square root of \(P\) if and only if \(G = WH\) for some orthogonal matrix \(W\). Can you see how to construct a symmetric square root? (You may find it helpful to begin with the eigen-decomposition \(P = U^{T} DU\), where \(U\) is orthogonal and \(D\) is diagonal.)
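
    As a quick numerical illustration of the symmetric square root construction suggested in the exercise (a sketch, not part of the original notes; NumPy and the particular matrix \(G\) are assumptions chosen here for illustration):

    ```python
    import numpy as np

    # An illustrative nonsingular G, so P = G^T G is positive definite.
    G = np.array([[2.0, 1.0],
                  [0.0, 3.0]])
    P = G.T @ G

    # Eigen-decomposition of the symmetric P: numpy's eigh returns
    # P = U @ diag(d) @ U.T with U orthogonal and d > 0.
    d, U = np.linalg.eigh(P)

    # Symmetric square root: S = U diag(sqrt(d)) U^T, so S = S^T and S S = P.
    S = U @ np.diag(np.sqrt(d)) @ U.T

    print(np.allclose(S @ S, P))  # True: S is a square root of P
    print(np.allclose(S, S.T))    # True: and it is symmetric
    ```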

    Quadratic Lyapunov Functions for CT LTI Systems

    Consider defining a Lyapunov function candidate of the form

    \[V(x)=x^{T} P x, \quad P>0, \ \tag{14.3}\]

    for the system (14.1). Then

    \[\begin{aligned}
    \dot{V}(x) &=\dot{x}^{T} P x+x^{T} P \dot{x} \\
    &=x^{T} A^{T} P x+x^{T} P A x \\
    &=x^{T}\left(A^{T} P+P A\right) x \\
    &=-x^{T} Q x
    \end{aligned}\nonumber\]

    where we have introduced the notation \(Q=-\left(A^{T} P+P A\right)\); note that \(Q\) is symmetric. Now invoking the Lyapunov stability results from Lecture 5, we see that \(V\) is a Lyapunov function if \(Q \geq 0\), in which case the equilibrium point at the origin of the system (14.1) is stable i.s.L. If \(Q > 0\), then the equilibrium point at the origin is globally asymptotically stable. In this latter case, the origin must be the only equilibrium point of the system, so we typically say the system (rather than just the equilibrium point) is asymptotically stable.

    The preceding relationships show that in order to find a quadratic Lyapunov function for the system (14.1), we can pick \(Q > 0\) and then try to solve the equation

    \[A^{T} P+P A=-Q \ \tag{14.4}\]

    for \(P\). This equation is referred to as a Lyapunov equation, and is a linear system of equations in the entries of \(P\). If it has a solution, then it has a symmetric solution (show this!), so we only consider symmetric solutions. If it has a positive definite solution \(P > 0\), then we evidently have a Lyapunov function \(x^{T} P x\) that will allow us to prove the asymptotic stability of the system (14.1). The interesting thing about LTI systems is that the converse also holds: if the system is asymptotically stable, then the Lyapunov equation (14.4) has a positive definite solution \(P > 0\) (which, as we shall show, is unique). This result is stated and proved in the following theorem.
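
    As a numerical sketch of this recipe (not from the original notes; SciPy and the particular stable matrix \(A\) below are assumptions), pick \(Q = I\), solve (14.4), and check that the solution is positive definite:

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    # An illustrative matrix with eigenvalues -1 and -2 (in the OLHP).
    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])
    Q = np.eye(2)

    # SciPy solves M X + X M^H = C; taking M = A^T and C = -Q gives
    # A^T P + P A = -Q, which is the Lyapunov equation (14.4).
    P = solve_continuous_lyapunov(A.T, -Q)

    # By Proposition 14.1, P > 0 iff all eigenvalues of P are positive.
    print(np.linalg.eigvalsh(P))  # all positive for this stable A
    ```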

    Theorem 14.1

    Given the dynamic system (14.1) and any \(Q > 0\), there exists a positive definite solution \(P\) of the Lyapunov equation

    \[A^{T} P+P A=-Q\nonumber\]

    if and only if all the eigenvalues of \(A\) are in the open left half plane (OLHP). The solution \(P\) in this case is unique.

    Proof

    If \(P > 0\) is a solution of (14.4), then \(V (x) = x^{T} P x\) is a Lyapunov function of system (14.1) with \(\dot{V} (x) < 0\) for any \(x \neq 0\). Hence, system (14.1) is (globally) asymptotically stable and thus the eigenvalues of \(A\) are in the OLHP.

    To prove the converse, suppose \(A\) has all eigenvalues in the OLHP, and \(Q > 0\) is given. Define the symmetric matrix \(P\) by

    \[P=\int_{0}^{\infty} e^{t A^{T}} Q e^{t A} d t. \ \tag{14.5}\]

    This integral is well defined because the integrand decays exponentially to zero, since the eigenvalues of \(A\) are in the OLHP. Now

    \[\begin{aligned}
    A^{T} P+P A &=\int_{0}^{\infty} A^{T} e^{t A^{T}} Q e^{t A} d t+\int_{0}^{\infty} e^{t A^{T}} Q e^{t A} A d t \\
    &=\int_{0}^{\infty} \frac{d}{d t}\left[e^{t A^{T}} Q e^{t A}\right] d t \\
    &=-Q
    \end{aligned}\nonumber\]

    so \(P\) satisfies the Lyapunov equation.

    To prove that \(P\) is positive definite, note that

    \[\begin{aligned}
    x^{T} P x &=\int_{0}^{\infty} x^{T} e^{t A^{T}} Q e^{t A} x d t \\
    &=\int_{0}^{\infty}\left\|Q^{\frac{1}{2}} e^{t A} x\right\|^{2} d t \geq 0
    \end{aligned}\nonumber\]

    and

    \[x^{T} P x=0 \Rightarrow Q^{\frac{1}{2}} e^{t A} x=0 \quad \forall t \geq 0 \Rightarrow x=0\nonumber\]

    where \(Q^{\frac{1}{2}}\) denotes a square root of \(Q\). Hence \(P\) is positive definite.

    To prove that the \(P\) defined in (14.5) is the unique solution to (14.4) when \(A\) has all eigenvalues in the OLHP, suppose that \(P_{2}\) is another solution. Then

    \[\begin{aligned}
    P_{2} &=-\int_{0}^{\infty} \frac{d}{d t}\left[e^{t A^{T}} P_{2} e^{t A}\right] d t \quad(\text { verify this identity }) \\
    &=-\int_{0}^{\infty} e^{t A^{T}}\left(A^{T} P_{2}+P_{2} A\right) e^{t A} d t \\
    &=\int_{0}^{\infty} e^{t A^{T}} Q e^{t A} d t=P
    \end{aligned}\nonumber\]

    This completes the proof of the theorem.
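
    The integral formula (14.5) can also be checked numerically against a direct solution of (14.4); the following is a rough sketch (SciPy, the illustrative \(A\) and \(Q\), and the crude quadrature grid are all assumptions):

    ```python
    import numpy as np
    from scipy.linalg import expm, solve_continuous_lyapunov

    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])   # illustrative stable matrix
    Q = np.eye(2)

    # Crude quadrature of (14.5): P = integral of e^{tA^T} Q e^{tA} dt.
    # The integrand decays exponentially, so truncating at t = 40 is safe here.
    ts = np.linspace(0.0, 40.0, 4001)
    integrand = np.array([expm(t * A.T) @ Q @ expm(t * A) for t in ts])
    P_integral = np.trapz(integrand, dx=ts[1] - ts[0], axis=0)

    # It should agree (up to quadrature error) with the unique solution of (14.4).
    P_solver = solve_continuous_lyapunov(A.T, -Q)
    print(np.allclose(P_integral, P_solver, atol=1e-3))  # True
    ```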

    A variety of generalizations of this theorem are known.

    Quadratic Lyapunov Functions for DT LTI Systems

    Consider the system

    \[x(t+1)=A x(t)=f(x(t)) \ \tag{14.6}\]

    If

    \[V(x)=x^{T} P x, \nonumber\]

    then

    \[\dot{V}(x) \triangleq V(f(x))-V(x)=x^{T} A^{T} P A x-x^{T} P x. \nonumber\]

    Thus the resulting Lyapunov equation to study is

    \[A^{T} P A-P=-Q. \ \tag{14.7}\]

    The following theorem is analogous to what we proved in the CT case, and we leave its proof as an exercise.

    Theorem 14.2

    Given the dynamic system (14.6) and any \(Q > 0\), there exists a positive definite solution \(P\) of the Lyapunov equation

    \[A^{T} P A-P=-Q\nonumber\]

    if and only if all the eigenvalues of \(A\) have magnitude less than 1 (i.e. are in the open unit disc). The solution \(P\) in this case is unique.
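
    A numerical sketch of the DT counterpart (again, SciPy and the illustrative \(A\) are assumptions):

    ```python
    import numpy as np
    from scipy.linalg import solve_discrete_lyapunov

    # An illustrative matrix with eigenvalues 0.5 and -0.3 (open unit disc).
    A = np.array([[0.5, 0.2],
                  [0.0, -0.3]])
    Q = np.eye(2)

    # SciPy solves M X M^H - X + C = 0; taking M = A^T and C = Q gives
    # A^T P A - P = -Q, which is the discrete Lyapunov equation (14.7).
    P = solve_discrete_lyapunov(A.T, Q)

    print(np.linalg.eigvalsh(P))  # all positive, as Theorem 14.2 predicts
    ```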

    Example 14.1 Differential Inclusion

    In many situations, the evolution of a dynamic system can be uncertain. One way of modeling this uncertainty is by a differential (difference) inclusion, which can be described as follows:

    \[\dot{x}(t) \in\{A x(t) \mid A \in \mathcal{A}\}\nonumber\]

    where \(\mathcal{A}\) is a set of matrices. Consider the case where \(\mathcal{A}\) consists of a finite set of matrices and their convex combinations:

    \[\mathcal{A}=\left\{A=\sum_{i=1}^{m} \alpha_{i} A_{i} \mid \alpha_{i} \geq 0, \sum_{i=1}^{m} \alpha_{i}=1\right\}\nonumber\]

    One way to guarantee the stability of this system is to find one Lyapunov function for all systems defined by \(\mathcal{A}\). If we look for a quadratic Lyapunov function, then it suffices to find a \(P\) that satisfies:

    \[A_{i}^{T} P+P A_{i}<-Q, \quad i=1,2, \ldots, m\nonumber\]

    for some positive definite \(Q\). Then \(V(x)=x^{T} P x\) satisfies \(\dot{V}(x)<-x^{T} Q x\) along every trajectory (verify: for \(A=\sum_{i} \alpha_{i} A_{i}\), we have \(A^{T} P+P A=\sum_{i} \alpha_{i}\left(A_{i}^{T} P+P A_{i}\right)<-Q\), since the \(\alpha_{i}\) are nonnegative and sum to 1), showing that the system is asymptotically stable.
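
    In practice, a common \(P\) of this kind is found with a semidefinite-programming (LMI) solver. As a minimal sketch of the certificate itself (not a full LMI search; SciPy and the two illustrative vertex matrices are assumptions), one can solve the Lyapunov equation for one vertex and then check the resulting \(P\) against every vertex:

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    # Two illustrative stable vertex matrices.
    A1 = np.array([[-1.0, 0.5],
                   [0.0, -2.0]])
    A2 = np.array([[-2.0, 0.0],
                   [1.0, -1.0]])
    Q = np.eye(2)

    # Heuristic: solve A1^T P + P A1 = -2Q for P (so the inequality at A1
    # holds strictly), then test A_i^T P + P A_i < -Q at every vertex.
    P = solve_continuous_lyapunov(A1.T, -2.0 * Q)

    for Ai in (A1, A2):
        M = Ai.T @ P + P @ Ai + Q
        print(np.linalg.eigvalsh(M).max() < 0)  # want True for all vertices
    ```

    If the check fails at some vertex, this particular \(P\) is simply not a certificate; a dedicated LMI solver would then search over all \(P > 0\) at once.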

    Example 14.2 Set of Bounded Norm

    In this problem, we are interested in studying the stability of linear time-invariant systems of the form \(\dot{x}(t)=(A+\Delta) x(t)\) where \(\Delta\) is a real matrix perturbation with bounded norm. In particular, we are interested in calculating a good bound on the size of the smallest perturbation that will destabilize a stable matrix \(A\).

    This problem can be cast as a differential inclusion problem as in the previous example, with

    \[\mathcal{A}=\{A+\Delta \mid\|\Delta\| \leq \gamma, \Delta \text { is a real matrix }\}\nonumber\]

    Since \(A\) is stable, we can find a quadratic Lyapunov function with a matrix \(P\) satisfying \(A^{T} P+P A<-Q\) for some positive definite \(Q\). Applying the same Lyapunov function to the perturbed system, we get:

    \[\dot{V}(x)=x^{T}\left(A^{T} P+P A+\Delta^{T} P+P \Delta\right) x\nonumber\]

    It is evident that all perturbations satisfying

    \[\Delta^{T} P+P \Delta<Q\nonumber\]

    will result in a stable system. This can be guaranteed if

    \[2 \sigma_{\max }(P) \sigma_{\max }(\Delta)<\sigma_{\min }(Q)\nonumber\]

    This provides a bound on the destabilizing perturbation, although it is potentially conservative.
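
    A sketch of this bound for a concrete case (SciPy and the illustrative \(A\) are assumptions):

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])   # illustrative stable matrix
    Q = np.eye(2)

    # Solve A^T P + P A = -2Q, so that A^T P + P A < -Q holds strictly.
    P = solve_continuous_lyapunov(A.T, -2.0 * Q)

    # Any perturbation with 2 sigma_max(P) sigma_max(Delta) < sigma_min(Q),
    # i.e. ||Delta|| below the value printed here, leaves the system stable.
    sig_P = np.linalg.svd(P, compute_uv=False)
    sig_Q = np.linalg.svd(Q, compute_uv=False)
    print(sig_Q.min() / (2.0 * sig_P.max()))  # a conservative stability margin
    ```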

    Example 14.3 Bounded Perturbation

    Casting the perturbation in the previous example in terms of a differential inclusion introduces a degree of conservatism, in that the value \(\Delta\) takes can change as a function of time. Consider the system:

    \[\dot{x}(t)=(A+\Delta) x(t)\nonumber\]

    where \(A\) is a known fixed stable matrix and \(\Delta\) is an unknown fixed real perturbation matrix. The stability margin of this system is defined as

    \[\gamma(A)=\min _{\Delta \in \mathbb{R}^{n \times n}}\{\|\Delta\| \mid A+\Delta \text { is unstable }\}\nonumber\]

    We desire to compute a good lower bound on \(\gamma(A)\). The previous example gave one such bound.

    First, it is easy to argue that the minimizing solution \(\Delta_{0}\) of the above problem results in \(A+\Delta_{0}\) having eigenvalues on the imaginary axis (either at the origin, or at two complex conjugate locations). This is a consequence of the fact that the eigenvalues of \(A+p \Delta_{0}\) move continuously in the complex plane as the parameter \(p\) varies from 0 to 1. The intersection with the imaginary axis must happen at \(p=1\); if not, a perturbation of smaller size could be found.

    We can get a lower bound on \(\gamma(A)\) by dropping the condition that \(\Delta\) is a real matrix and allowing complex matrices (is it clear why this gives a lower bound?). We can show:

    \[\min _{\Delta \in \mathbf{C}^{n \times n}}\{\|\Delta\| \mid A+\Delta \text { is unstable }\}=\min _{\omega \in \mathbb{R}} \sigma_{\min }(A-j \omega I)\nonumber\]

    To verify this, notice that if the minimizing solution has an eigenvalue on the imaginary axis at \(j \omega_{0}\), then \(j \omega_{0} I-A-\Delta_{0}\) must be singular, while we know that \(j \omega_{0} I-A\) is not. The smallest perturbation that achieves this has size \(\sigma_{\min }\left(A-j \omega_{0} I\right)\). We then choose the \(\omega_{0}\) that gives the smallest such size. In the exercises, we further improve this bound.
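
    The right-hand side is easy to evaluate numerically; a coarse sketch (NumPy, the illustrative \(A\), and the frequency grid are all assumptions):

    ```python
    import numpy as np

    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])   # illustrative stable matrix
    n = A.shape[0]

    # Lower bound on gamma(A): min over omega of sigma_min(A - j*omega*I).
    # A crude grid scan; a careful implementation would refine the minimizer.
    omegas = np.linspace(-10.0, 10.0, 2001)
    sigmas = [np.linalg.svd(A - 1j * w * np.eye(n), compute_uv=False).min()
              for w in omegas]
    print(min(sigmas))  # the complex stability radius, a lower bound on gamma(A)
    ```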


    This page titled 14.1: Quadratic Lyapunov Functions for LTI Systems is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Mohammed Dahleh, Munther A. Dahleh, and George Verghese (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.