
11.3: Exercises


    Exercise 11.1 Companion Matrices

    (a) The following two matrices and their transposes are said to be companion matrices of the polynomial \(q(z)=z^{n}+q_{n-1} z^{n-1}+\ldots+q_{0}\). Determine the characteristic polynomials of these four matrices, and hence explain the origin of the name. (Hint: First find explanations for why all four matrices must have the same characteristic polynomial, then determine the characteristic polynomial of any one of them.)

    \[A_{1}=\left(\begin{array}{ccccc}
    -q_{n-1} & 1 & 0 & \ldots & 0 \\
    -q_{n-2} & 0 & 1 & \ldots & 0 \\
    \vdots & \vdots & \vdots & \ddots & \vdots \\
    -q_{1} & 0 & 0 & \ldots & 1 \\
    -q_{0} & 0 & 0 & \ldots & 0
    \end{array}\right) \quad A_{2}=\left(\begin{array}{ccccc}
    0 & 1 & 0 & \ldots & 0 \\
    0 & 0 & 1 & \ldots & 0 \\
    \vdots & \vdots & \vdots & \ddots & \vdots \\
    0 & 0 & 0 & \ldots & 1 \\
    -q_{0} & -q_{1} & -q_{2} & \ldots & -q_{n-1}
    \end{array}\right)\nonumber\]

    (b) Show that the matrix \(A_{2}\) above has only one (right) eigenvector for each distinct eigenvalue \(\lambda_{i}\), and that this eigenvector is of the form \(\left[\begin{array}{lllll}
    1 & \lambda_{i} & \lambda_{i}^{2} & \ldots & \lambda_{i}^{n-1}
    \end{array}\right]^{T}\).
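
    (As a rough numerical sanity check of (a) and (b), not a proof, one might try a Matlab fragment along the following lines; the polynomial coefficients used here are made up purely for illustration.)

        % Hypothetical example: q(z) = z^3 + 2 z^2 - 3 z + 1
        q = [2 -3 1];                              % [q_{n-1} ... q_1 q_0]
        n = length(q);
        A2 = [zeros(n-1,1) eye(n-1); -fliplr(q)];  % the companion form A_2
        poly(A2)                                   % should return [1 2 -3 1], i.e. the coefficients of q(z)
        [V, D] = eig(A2);
        lam = D(1,1);
        v = V(:,1)/V(1,1);                         % scale the eigenvector so its first entry is 1
        [v  lam.^(0:n-1).']                        % the two columns should agree (Vandermonde structure)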

    (c) If

    \[A=\left(\begin{array}{ccc}
    0 & 1 & 0 \\
    0 & 0 & 1 \\
    6 & 5 & -2
    \end{array}\right)\nonumber\]

    what are \(A^{k}\) and \(e^{At}\)? (Your answers may be left as a product of three or fewer matrices; do not bother to multiply them out.)
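
    (This \(A\) is the companion form \(A_{2}\) from part (a) for \(q(z)=z^{3}+2 z^{2}-5 z-6\), so its eigenvalues are the roots of that cubic. If they come out distinct, one acceptable "product of three matrices" answer is \(A^{k}=V \Lambda^{k} V^{-1}\) and \(e^{A t}=V e^{\Lambda t} V^{-1}\), with \(V\) the eigenvector matrix and \(\Lambda\) the diagonal eigenvalue matrix. The Matlab fragment below is only a numerical spot check of that factorization, at arbitrarily chosen \(k\) and \(t\).)

        A = [0 1 0; 0 0 1; 6 5 -2];
        [V, D] = eig(A);                       % columns of V: eigenvectors; D: diagonal eigenvalue matrix
        k = 5;  t = 0.3;                       % arbitrary test values
        norm(A^k - V*D^k*inv(V))               % should be ~0 up to round-off
        norm(expm(A*t) - V*expm(D*t)*inv(V))   % e^{At} = V e^{Dt} V^{-1}, again ~0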

    Exercise 11.2

    Suppose you are given the state-space equation

    \[\dot{x}(t)=A x(t)+B u(t)\nonumber\]

    with an input \(u(t)\) that is piecewise constant over intervals of length \(T\):

    \[u(t)=u[k], \quad k T<t \leq(k+1) T\nonumber\]

    (a) Show that the sampled state \(x[k] = x(kT)\) is governed by a sampled-data state-space model of the form

    \[x[k+1]=F x[k]+G u[k]\nonumber\]

    for constant matrices \(F\) and \(G\) (i.e. matrices that do not depend on \(t\) or \(k\)), and determine these matrices in terms of \(A\) and \(B\). (Hint: The result will involve the matrix exponential, \(e^{At}\).) How are the eigenvalues and eigenvectors of \(F\) related to those of \(A\)?
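
    (Once you have derived \(F\) and \(G\), a convenient numerical cross-check, separate from the derivation itself, is the standard augmented-matrix identity \(\exp \left(T\left[\begin{array}{ll}A & B \\ 0 & 0\end{array}\right]\right)=\left[\begin{array}{ll}F & G \\ 0 & I\end{array}\right]\). The Matlab fragment below uses made-up \(A\), \(B\), and \(T\).)

        % Illustrative values only; A, B, T are arbitrary here
        A = [0 1; -4 0];  B = [0; 1];  T = 0.1;
        n = size(A,1);  m = size(B,2);
        M = expm([A B; zeros(m, n+m)] * T);
        F = M(1:n, 1:n)                        % should equal expm(A*T)
        G = M(1:n, n+1:n+m)                    % compare with your closed-form expression for G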

    (b) Compute \(F\) and \(G\) in the above discrete-time sampled-data model when

    \[A=\left(\begin{array}{cc}
    0 & 1 \\
    -\omega_{0}^{2} & 0
    \end{array}\right), \quad B=\left(\begin{array}{l}
    0 \\
    1
    \end{array}\right)\nonumber\]

    (c) Suppose we implement a state feedback control law of the form \(u[k] = Hx[k]\), where \(H\) is a gain matrix. What choice of \(H\) will cause the state of the resulting closed-loop system, \(x[k + 1] = (F + GH)x[k]\), to go to 0 in at most two steps, from any initial condition (\(H\) is then said to produce "deadbeat" behavior)? To simplify the notation for your calculations, denote \(\cos \omega_{0} T\) by \(c\) and \(\sin \omega_{0} T\) by \(s\). Assume now that \(\omega_{0} T=\pi / 6\), and check your result by substituting in your computed \(H\) and seeing if it does what you intended.
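
    (For the check suggested at the end of (c), a Matlab fragment such as the one below can be used. It assumes \(\omega_{0} T=\pi / 6\) with \(\omega_{0}=1\), so that \(F\) and \(G\) take the values quoted in (d); the gain is recovered here numerically with acker, which requires the Control System Toolbox, and should match your hand-derived \(H\).)

        c = cos(pi/6);  s = sin(pi/6);    % c = sqrt(3)/2, s = 1/2
        F = [c s; -s c];
        G = [1-c; s];
        H = -acker(F, G, [0 0])           % place both closed-loop eigenvalues at 0; compare with your H
        Acl = F + G*H;
        norm(Acl^2)                       % deadbeat in two steps requires (F+GH)^2 = 0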

    (d) For \(\omega_{0} T=\pi / 6\) and \(\omega_{0}=1\), your matrices from (b) should work out to be

    \[F=\left(\begin{array}{ll}
    \sqrt{3} / 2 & 1 / 2 \\
    -1 / 2 & \sqrt{3} / 2
    \end{array}\right), \quad G=\left(\begin{array}{c}
    1-(\sqrt{3} / 2) \\
    1 / 2
    \end{array}\right)\nonumber\]

    Use Matlab to compute and plot the response of each of the state variables from \(k = 0\) to \(k = 10\), assuming \(x[0]=[4,0]^{T}\) and with the following choices for \(u[k]\):

    • (i) the open-loop system, with \(u[k] = 0\);
    • (ii) the closed-loop system with \(u[k] = Hx[k]\), where \(H\) is the feedback gain you computed in (c), with \(\omega_{0}=1\); also plot \(u[k]\) in this case (a Matlab sketch for these simulations is given below).
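
    (A minimal Matlab sketch for these simulations might look like the following; it assumes the deadbeat gain of part (c), recovered here with acker as in the earlier check.)

        F = [sqrt(3)/2 1/2; -1/2 sqrt(3)/2];
        G = [1-sqrt(3)/2; 1/2];
        H = -acker(F, G, [0 0]);          % deadbeat gain from part (c)
        N = 10;  x0 = [4; 0];

        % (i) open loop, u[k] = 0
        x = zeros(2, N+1);  x(:,1) = x0;
        for k = 1:N, x(:,k+1) = F*x(:,k); end

        % (ii) closed loop, u[k] = H x[k]
        xc = zeros(2, N+1);  u = zeros(1, N);  xc(:,1) = x0;
        for k = 1:N
            u(k) = H*xc(:,k);
            xc(:,k+1) = (F + G*H)*xc(:,k);
        end

        subplot(3,1,1); stairs(0:N, x');   title('open-loop state x[k]');
        subplot(3,1,2); stairs(0:N, xc');  title('closed-loop state x[k]');
        subplot(3,1,3); stairs(0:N-1, u);  title('closed-loop input u[k]');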

    (e) Now suppose the controller is computer-based. The above control law \(u[k] = Hx[k]\) is implementable if the time taken to compute \(Hx[k]\) is negligible compared to \(T\). Often, however, it takes a considerable fraction of the sampling interval to do this computation, so the control that is applied to the system at time \(k\) is forced to use the state measurement at the previous instant. Suppose therefore that \(u[k] = Hx[k - 1]\). Find a state-space model for the closed-loop system in this case, written in terms of \(F\), \(G\), and \(H\). (Hint: The computer-based controller now has memory!) What are the eigenvalues of the closed-loop system now, with \(H\) as in (c)? Again use Matlab to plot the response of the system to the same initial condition as in (d), and compare with the results in (d)(ii). Is there another choice of \(H\) that could yield deadbeat behavior? If so, find it; if not, suggest how to modify the control law to obtain deadbeat behavior.
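
    (One way to organize the computation in (e), though certainly not the only realization, is to augment the state with the stored measurement, \(z[k]=\left[\begin{array}{l}x[k] \\ x[k-1]\end{array}\right]\), so that \(z[k+1]=\left[\begin{array}{cc}F & G H \\ I & 0\end{array}\right] z[k]\). The Matlab fragment below forms this matrix, lists its eigenvalues, and simulates the response; taking \(x[-1]=x[0]\) to initialize the stored measurement is an arbitrary choice made here for illustration.)

        F = [sqrt(3)/2 1/2; -1/2 sqrt(3)/2];
        G = [1-sqrt(3)/2; 1/2];
        H = -acker(F, G, [0 0]);          % the gain from part (c), now applied with a one-step delay
        Fcl = [F G*H; eye(2) zeros(2)];   % z[k] = [x[k]; x[k-1]]
        eig(Fcl)                          % closed-loop eigenvalues with the delayed control

        N = 10;  z = zeros(4, N+1);
        z(:,1) = [4; 0; 4; 0];            % assumes x[-1] = x[0] purely for illustration
        for k = 1:N, z(:,k+1) = Fcl*z(:,k); end
        stairs(0:N, z(1:2,:)');  title('x[k] with u[k] = H x[k-1]');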

    Exercise 11.3

    Given the matrix

    \[A=\left[\begin{array}{cc}
    \sigma & \omega \\
    -\omega & \sigma
    \end{array}\right]\nonumber\]

    show that

    \[\exp \left(t\left[\begin{array}{cc}
    \sigma & \omega \\
    -\omega & \sigma
    \end{array}\right]\right)=\left[\begin{array}{cc}
    e^{\sigma t} \cos (\omega t) & e^{\sigma t} \sin (\omega t) \\
    -e^{\sigma t} \sin (\omega t) & e^{\sigma t} \cos (\omega t)
    \end{array}\right]\nonumber\]
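
    (A numerical spot check of this identity, with arbitrary made-up values of \(\sigma\), \(\omega\), and \(t\):)

        sigma = -0.4;  omega = 2.5;  t = 0.7;    % arbitrary test values
        A = [sigma omega; -omega sigma];
        LHS = expm(t*A);
        RHS = exp(sigma*t)*[cos(omega*t) sin(omega*t); -sin(omega*t) cos(omega*t)];
        norm(LHS - RHS)                          % should be ~0 up to round-off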

    Exercise 11.4

    Suppose \(A\) and \(B\) are constant square matrices. Show that

    \[\exp \left(t\left[\begin{array}{cc}
    A & 0 \\
    0 & B
    \end{array}\right]\right)=\left[\begin{array}{cc}
    e^{t A} & 0 \\
    0 & e^{t B}
    \end{array}\right]\nonumber\]
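
    (Again, a purely numerical check with randomly generated blocks, not a substitute for the proof:)

        A = randn(3);  B = randn(2);  t = 0.5;   % arbitrary sizes and data
        LHS = expm(t*blkdiag(A, B));
        RHS = blkdiag(expm(t*A), expm(t*B));
        norm(LHS - RHS)                          % should be ~0 up to round-off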

    Exercise 11.5

    Suppose \(A\) and \(B\) are constant square matrices. Show that the solution of the following system of differential equations,

    \[\dot{x}(t)=e^{-t A} B e^{t A} x(t)\nonumber\]

    is given by

    \[x(t)=e^{-t A} e^{\left(t-t_{0}\right)(A+B)} e^{t_{0} A} x\left(t_{0}\right)\nonumber\]
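
    (Before proving this, one can gain confidence in the claimed solution by integrating the differential equation numerically and comparing; the fragment below uses randomly generated \(A\), \(B\), and initial data, and takes \(t_{0}=0\) for simplicity.)

        n = 3;  A = randn(n);  B = randn(n);
        t0 = 0;  tf = 2;  x0 = randn(n,1);
        rhs = @(t,x) expm(-t*A)*B*expm(t*A)*x;
        [tt, xx] = ode45(rhs, [t0 tf], x0);
        xform = expm(-tf*A)*expm((tf-t0)*(A+B))*expm(t0*A)*x0;
        norm(xx(end,:).' - xform)                % should be small (limited by ode45 tolerances)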

    Exercise 11.6

    Suppose \(A\) is a constant square matrix, and \(f (t)\) is a continuous scalar function of \(t\). Show that the state transition matrix for the system

    \[\dot{x}(t)=f(t) A x(t)\nonumber\]

    is given by

    \[\Phi\left(t, t_{0}\right)=\exp \left(\left(\int_{t_{0}}^{t} f(\tau) d \tau\right) A\right)\nonumber\]
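
    (A numerical check of the claimed transition matrix, using the arbitrary choice \(f(t)=\cos t\), for which \(\int_{t_{0}}^{t} f(\tau) d \tau=\sin t-\sin t_{0}\), and a random \(A\):)

        A = randn(3);  f = @(t) cos(t);          % arbitrary example
        t0 = 0.2;  tf = 1.5;  x0 = randn(3,1);
        [~, xx] = ode45(@(t,x) f(t)*A*x, [t0 tf], x0);
        Phi = expm((sin(tf) - sin(t0))*A);
        norm(xx(end,:).' - Phi*x0)               % should be small (limited by ode45 tolerances)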

    Exercise 11.7 (Floquet Theory)

    Consider the system

    \[\dot{x}(t)=A(t) x(t)\nonumber\]

    where \(A(t)\) is a periodic matrix with period \(T\), so \(A(t + T ) = A(t)\). We want to study the state transition matrix \(\Phi(t, t_{0})\) associated with this periodically time-varying system.

    1. First let us start with the state transition matrix \(\Phi(t, 0)\), which satisfies

    \[\begin{aligned}
    \dot{\Phi} &=A(t) \Phi \\
    \Phi(0,0) &=I
    \end{aligned}\nonumber\]

    Define the matrix \(\Psi(t, 0)=\Phi(t+T, 0)\) and show that \(\Psi\) satisfies

    \[\begin{aligned}
    \dot{\Psi}(t, 0) &=A(t) \Psi(t, 0) \\
    \Psi(0,0) &=\Phi(T, 0)
    \end{aligned}\nonumber\]

    2. Show that this implies that \(\Phi(t+T, 0)=\Phi(t, 0) \Phi(T, 0)\).

    3. Using the Jacobi-Liouville formula, show that \(\Phi(T, 0)\) is invertible and therefore can be written as \(\Phi(T, 0)=e^{TR}\).

    4. Define

    \[P(t)^{-1}=\Phi(t, 0) e^{-t R}\nonumber\]

    and show that \(P(t)^{-1}\), and consequently \(P(t)\), are periodic with period \(T\). Also show that \(P(T) = I\). This means that

    \[\Phi(t, 0)=P(t)^{-1} e^{t R}\nonumber\]

    5. Show that \(\Phi\left(0, t_{0}\right)=\Phi^{-1}\left(t_{0}, 0\right)\). Using the fact that \(\Phi\left(t, t_{0}\right)=\Phi(t, 0) \Phi\left(0, t_{0}\right)\), show that

    \[\Phi\left(t, t_{0}\right)=P(t)^{-1} e^{\left(t-t_{0}\right) R} P\left(t_{0}\right)\nonumber\]

    What is the significance of this result?
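
    (The structure asserted in parts 2-5 can be explored numerically. The sketch below uses an arbitrary made-up periodic example, \(A(t)=A_{0}+A_{1} \cos (2 \pi t / T)\); it computes the needed transition matrices by integrating \(\dot{\Phi}=A(t) \Phi\), takes \(R=\frac{1}{T} \log \Phi(T, 0)\) with logm, and checks the claims of parts 2 and 4.)

        T = 2;  A0 = [0 1; -1 -0.1];  A1 = 0.3*[0 0; 1 0];   % made-up periodic example
        A = @(t) A0 + A1*cos(2*pi*t/T);
        rhs = @(t,v) reshape(A(t)*reshape(v,2,2), [], 1);    % dPhi/dt = A(t)*Phi, stored as a vector
        opts = odeset('RelTol',1e-10, 'AbsTol',1e-12);
        I0 = reshape(eye(2), [], 1);
        t1 = 0.7;                                            % arbitrary intermediate time

        [~, V] = ode45(rhs, [0 T],    I0, opts);  PhiT  = reshape(V(end,:).', 2, 2);  % Phi(T,0)
        [~, V] = ode45(rhs, [0 t1],   I0, opts);  Phi1  = reshape(V(end,:).', 2, 2);  % Phi(t1,0)
        [~, V] = ode45(rhs, [0 t1+T], I0, opts);  Phi1T = reshape(V(end,:).', 2, 2);  % Phi(t1+T,0)

        norm(Phi1T - Phi1*PhiT)                  % part 2: should be ~0
        R = logm(PhiT)/T;                        % part 3: Phi(T,0) = expm(T*R)
        norm(PhiT*expm(-T*R) - eye(2))           % part 4: P(T) = I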


    This page titled 11.3: Exercises is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Mohammed Dahleh, Munther A. Dahleh, and George Verghese (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.