11.1: The Time Varying Case


    Consider the \(n\)th-order continuous-time linear state-space description

    \[\begin{aligned}
    \dot{x}(t) &=A(t) x(t)+B(t) u(t) \\
    y(t) &=C(t) x(t)+D(t) u(t)
    \end{aligned} \ \tag{11.1}\]

    We shall always assume that the coefficient matrices in the above model are sufficiently well behaved for there to exist a unique solution to the state-space model for any specified initial condition \(x(t_{0})\) and any integrable input \(u(t)\). For instance, if these coefficient matrices are piecewise continuous, with a finite number of discontinuities in any finite interval, then the desired existence and uniqueness properties hold.

    We can describe the solution of (11.1) in terms of a matrix function \(\Phi(t, \tau)\) that has the following two properties:

    \[\begin{aligned}
    \dot{\Phi}(t, \tau) &=A(t) \Phi(t, \tau) \ (11.2) \\
    \Phi(\tau, \tau) &=I \ \ (11.3)
    \end{aligned}\nonumber\]

    This matrix function is referred to as the state transition matrix, and under our assumption on the nature of \(A(t)\) it turns out that the state transition matrix exists and is unique.
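
    For orientation, recall the familiar time-invariant special case: when \(A(t) \equiv A\) is constant, the state transition matrix is the matrix exponential

    \[\Phi(t, \tau)=e^{A(t-\tau)}\nonumber\]

    which satisfies (11.2) and (11.3) by direct differentiation. The general time-varying case admits no such closed form, as discussed below.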

    We will show that, given \(x(t_{0})\) and \(u(t)\),

    \[x(t)=\Phi\left(t, t_{0}\right) x\left(t_{0}\right)+\int_{t_{0}}^{t} \Phi(t, \tau) B(\tau) u(\tau) d \tau \ \tag{11.4}\]

    Observe again that, as in the DT case, the terms corresponding to the zero-input and zero-state responses are evident in (11.4). In order to verify (11.4), we differentiate it with respect to \(t\), applying the Leibniz rule to the integral term:

    \[\dot{x}(t)=\dot{\Phi}\left(t, t_{0}\right) x\left(t_{0}\right)+\int_{t_{0}}^{t} \dot{\Phi}(t, \tau) B(\tau) u(\tau) d \tau+\Phi(t, t) B(t) u(t) \ \tag{11.5}\]

    Using (11.2) and (11.3),

    \[\dot{x}(t)=A(t) \Phi\left(t, t_{0}\right) x\left(t_{0}\right)+\int_{t_{0}}^{t} A(t) \Phi(t, \tau) B(\tau) u(\tau) d \tau+B(t) u(t) \ \tag{11.6}\]

    Now, since the integral is taken with respect to \(\tau\), \(A(t)\) can be factored out:

    \[\begin{aligned}
    \dot{x}(t) &=A(t)\left[\Phi\left(t, t_{0}\right) x\left(t_{0}\right)+\int_{t_{0}}^{t} \Phi(t, \tau) B(\tau) u(\tau) d \tau\right]+B(t) u(t) \ (11.7) \\
    &=A(t) x(t)+B(t) u(t) \ \ \ \ (11.8)
    \end{aligned}\nonumber\]

    so the expression in (11.4) does indeed satisfy the state evolution equation. To verify that it also matches the specified initial condition, set \(t=t_{0}\) in (11.4); the integral term then vanishes, leaving

    \[x\left(t_{0}\right)=\Phi\left(t_{0}, t_{0}\right) x\left(t_{0}\right)=I x\left(t_{0}\right)=x\left(t_{0}\right) \ \tag{11.9}\]

    We have now shown that the matrix function \(\Phi(t, \tau)\) satisfying (11.2) and (11.3) yields the solution to the continuous-time system equation (11.1).

    Exercise

    Show that \(\Phi(t, \tau)\) must be nonsingular. (Hint: Invoke our claim about uniqueness of solutions.)


    The question that remains is how to find the state transition matrix. For a general linear time-varying system, there is no closed-form expression for \(\Phi(t, \tau)\) as a function of \(A(t)\). Instead, we are essentially limited to numerical solution of the equation (11.2) with the boundary condition (11.3). This equation may be solved one column at a time, as follows. We numerically compute the respective solutions \(x^{i}(t)\) of the homogeneous equation

    \[\dot{x}(t)=A(t) x(t) \ \tag{11.10}\]

    for each of the \(n\) initial conditions below:

    \[x^{1}\left(t_{0}\right)=\left[\begin{array}{c}
    1 \\
    0 \\
    0 \\
    \vdots \\
    0
    \end{array}\right], \quad x^{2}\left(t_{0}\right)=\left[\begin{array}{c}
    0 \\
    1 \\
    0 \\
    \vdots \\
    0
    \end{array}\right], \quad \ldots, \quad x^{n}\left(t_{0}\right)=\left[\begin{array}{c}
    0 \\
    0 \\
    0 \\
    \vdots \\
    1
    \end{array}\right]\nonumber\]

    Then

    \[\Phi\left(t, t_{0}\right)=\left[\begin{array}{lll}
    x^{1}(t) & \ldots & x^{n}(t)
    \end{array}\right] \ \tag{11.11}\]

    In summary, knowing \(n\) solutions of the homogeneous system for \(n\) linearly independent initial conditions, we are able to construct the general solution of this linear time-varying system. The underlying reason this construction works is that solutions of a linear system may be superposed, and our system is of order \(n\).
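
    To make this construction concrete, here is a minimal numerical sketch in Python, assuming NumPy and SciPy are available; the particular \(A(t)\) used is a hypothetical illustration, not taken from the text:

        # Column-by-column construction of Phi(t, t0): integrate the homogeneous
        # equation x_dot = A(t) x once for each standard basis vector at t0, and
        # stack the resulting solutions x^i(t) as the columns of Phi(t, t0).
        import numpy as np
        from scipy.integrate import solve_ivp

        def A(t):
            # An arbitrary illustrative A(t); any sufficiently well-behaved
            # (e.g., piecewise-continuous) matrix function will do.
            return np.array([[0.0, 1.0],
                             [-1.0, -0.1 * t]])

        def transition_matrix(t, t0, n=2):
            """Approximate Phi(t, t0) one column at a time."""
            cols = []
            for i in range(n):
                x0 = np.zeros(n)
                x0[i] = 1.0  # the i-th initial condition x^i(t0)
                sol = solve_ivp(lambda s, x: A(s) @ x, (t0, t), x0,
                                rtol=1e-10, atol=1e-12)
                cols.append(sol.y[:, -1])  # x^i(t) is the i-th column
            return np.column_stack(cols)

        print(transition_matrix(2.0, 0.0))  # Phi(2, 0); note Phi(t0, t0) = I by construction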

    Example 11.1 A Special Case

    Consider the following time-varying system

    \[\frac{d}{d t}\left[\begin{array}{l}
    x_{1}(t) \\
    x_{2}(t)
    \end{array}\right]=\left[\begin{array}{cc}
    \alpha(t) & \beta(t) \\
    -\beta(t) & \alpha(t)
    \end{array}\right]\left[\begin{array}{l}
    x_{1}(t) \\
    x_{2}(t)
    \end{array}\right]\nonumber\]

    where \(\alpha(t)\) and \(\beta(t)\) are continuous functions of \(t\). It turns out that the special structure of the matrix \(A(t)\) here permits an analytical solution. Specifically, verify that the state transition matrix of the system is

    \[\Phi\left(t, t_{0}\right)=\left[\begin{array}{cc}
    \exp \left(\int_{t_{0}}^{t} \alpha(\tau) d \tau\right) \cos \left(\int_{t_{0}}^{t} \beta(\tau) d \tau\right) & \exp \left(\int_{t_{0}}^{t} \alpha(\tau) d \tau\right) \sin \left(\int_{t_{0}}^{t} \beta(\tau) d \tau\right) \\
    -\exp \left(\int_{t_{0}}^{t} \alpha(\tau) d \tau\right) \sin \left(\int_{t_{0}}^{t} \beta(\tau) d \tau\right) & \exp \left(\int_{t_{0}}^{t} \alpha(\tau) d \tau\right) \cos \left(\int_{t_{0}}^{t} \beta(\tau) d \tau\right)
    \end{array}\right]\nonumber\]

    The secret to solving the above system, or equivalently to obtaining its state transition matrix, is to transform it to polar coordinates via the definitions

    \[\begin{aligned}
    r^{2}(t) &=x_{1}^{2}(t)+x_{2}^{2}(t) \\
    \theta(t) &=\tan ^{-1}\left(\frac{x_{2}(t)}{x_{1}(t)}\right)
    \end{aligned}\nonumber\]

    We leave it to you to deduce that

    \[\begin{aligned}
    \frac{d}{d t} r^{2} &=2 \alpha r^{2} \\
    \frac{d}{d t} \theta &=-\beta
    \end{aligned}\nonumber\]

    The solution of this system of equations is then given by

    \[r^{2}(t)=\exp \left(2 \int_{t_{0}}^{t} \alpha(\tau) d \tau\right) r^{2}\left(t_{0}\right)\nonumber\]

    and

    \[\theta(t)=\theta\left(t_{0}\right)-\int_{t_{0}}^{t} \beta(\tau) d \tau\nonumber\]
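
    As a quick numerical check of the claimed state transition matrix, one can compare it against direct integration of the state equation. The sketch below assumes SciPy, with the purely illustrative choices \(\alpha(t)=0.1\) and \(\beta(t)=t\):

        # Compare the analytic Phi of Example 11.1 with a numerically integrated one.
        import numpy as np
        from scipy.integrate import solve_ivp, quad

        alpha = lambda t: 0.1  # illustrative alpha(t)
        beta = lambda t: t     # illustrative beta(t)

        def A(t):
            return np.array([[alpha(t), beta(t)],
                             [-beta(t), alpha(t)]])

        def phi_analytic(t, t0):
            a = quad(alpha, t0, t)[0]  # integral of alpha over [t0, t]
            b = quad(beta, t0, t)[0]   # integral of beta over [t0, t]
            return np.exp(a) * np.array([[np.cos(b), np.sin(b)],
                                         [-np.sin(b), np.cos(b)]])

        def phi_numeric(t, t0):
            cols = []
            for x0 in np.eye(2):  # rows of I are the standard basis vectors
                sol = solve_ivp(lambda s, x: A(s) @ x, (t0, t), x0,
                                rtol=1e-10, atol=1e-12)
                cols.append(sol.y[:, -1])
            return np.column_stack(cols)

        # The discrepancy should be on the order of the solver tolerances.
        print(np.max(np.abs(phi_analytic(3.0, 1.0) - phi_numeric(3.0, 1.0))))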

    Further Properties of the State Transition Matrix

    The first property that we present involves the composition of the state transition matrix evaluated over different intervals. Suppose that at some time \(t_{0}\) the state vector is \(x(t_{0})=x_{0}\), with \(x_{0}\) an arbitrary vector. In the absence of an input, the state vector at time \(t\) is given by \(x(t)=\Phi\left(t, t_{0}\right) x_{0}\). At any other time \(t_{1}\), the state vector is given by \(x(t_{1})=\Phi\left(t_{1}, t_{0}\right) x_{0}\). We can also write

    \[\begin{aligned}
    x(t) &=\Phi\left(t, t_{1}\right) x\left(t_{1}\right)=\Phi\left(t, t_{1}\right) \Phi\left(t_{1}, t_{0}\right) x_{0} \\
    &=\Phi\left(t, t_{0}\right) x_{0}
    \end{aligned}\nonumber\]

    Since \(x_{0}\) is arbitrary, it follows that

    \[\Phi\left(t, t_{1}\right) \Phi\left(t_{1}, t_{0}\right)=\Phi\left(t, t_{0}\right) \nonumber\]

    for any \(t_{0}\) and \(t_{1}\). (Note that since the state transition matrix in CT is always invertible, there is no restriction that \(t_{1}\) lie between \(t_{0}\) and \(t\), unlike in the DT case, where the state transition matrix may not be invertible.)
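
    In particular, setting \(t=t_{0}\) in this composition rule and using \(\Phi(t_{0}, t_{0})=I\) gives

    \[\Phi\left(t_{0}, t_{1}\right) \Phi\left(t_{1}, t_{0}\right)=I\nonumber\]

    so \(\Phi\left(t_{0}, t_{1}\right)=\Phi^{-1}\left(t_{1}, t_{0}\right)\); running the system backward from \(t_{1}\) to \(t_{0}\) exactly inverts the forward transition.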

    Another property of interest (but one whose derivation can be safely skipped on a first reading) involves the determinant of the state transition matrix. We will now show that

    \[\operatorname{det}\left(\Phi\left(t, t_{0}\right)\right)=\exp \left(\int_{t_{0}}^{t} \operatorname{trace}[A(\tau)] d \tau\right) \ \tag{11.12}\]

    a result known as the Jacobi-Liouville formula. Before we derive this important formula, we need the following fact from matrix theory. For an \(n \times n\) matrix \(M\) and a real parameter \(\epsilon\), we have

    \[\operatorname{det}(I+\epsilon M)=1+\epsilon \operatorname{trace}(M)+O\left(\epsilon^{2}\right)\nonumber\]

    where \(O\left(\epsilon^{2}\right)\) denotes terms of order greater than or equal to \(\epsilon^{2}\). To verify this fact, let \(U\) be a similarity transformation that brings \(M\) to an upper triangular matrix \(T\), so \(M=U^{-1} T U\). Such a \(U\) can always be found, in many ways. (One way, for a diagonalizable matrix, is to pick \(U\) to be the modal matrix of \(M\), in which case \(T\) is actually diagonal; the Schur decomposition provides a suitable \(U\) in the general case.) Then the eigenvalues \(\{\lambda_{i}\}\) of \(M\) and \(T\) are identical, because similarity transformations do not change eigenvalues, and these numbers are precisely the diagonal elements of \(T\). Hence

    \[\begin{aligned}
    \operatorname{det}(I+\epsilon M) &=\operatorname{det}(I+\epsilon T) \\
    &=\prod_{i=1}^{n}\left(1+\epsilon \lambda_{i}\right) \\
    &=1+\epsilon \sum_{i=1}^{n} \lambda_{i}+O\left(\epsilon^{2}\right) \\
    &=1+\epsilon \operatorname{trace}(M)+O\left(\epsilon^{2}\right)
    \end{aligned}\nonumber\]

    Returning to the proof of (11.12), first observe that a first-order Taylor expansion in \(\epsilon\), combined with (11.2), gives

    \[\begin{aligned}
    \Phi\left(t+\epsilon, t_{0}\right) &=\Phi\left(t, t_{0}\right)+\epsilon \frac{d}{d t} \Phi\left(t, t_{0}\right)+O\left(\epsilon^{2}\right) \\
    &=\Phi\left(t, t_{0}\right)+\epsilon A(t) \Phi\left(t, t_{0}\right)+O\left(\epsilon^{2}\right)
    \end{aligned}\nonumber\]

    The derivative of the determinant of \(\Phi(t, t_{0})\) is given by

    \[\begin{aligned}
    \frac{d}{d t} \operatorname{det}\left[\Phi\left(t, t_{0}\right)\right] &=\lim _{\epsilon \rightarrow 0} \frac{1}{\epsilon}\left(\operatorname{det}\left[\Phi\left(t+\epsilon, t_{0}\right)\right]-\operatorname{det}\left[\Phi\left(t, t_{0}\right)\right]\right) \\
    &=\lim _{\epsilon \rightarrow 0} \frac{1}{\epsilon}\left(\operatorname{det}\left[(I+\epsilon A(t)) \Phi\left(t, t_{0}\right)\right]-\operatorname{det}\left[\Phi\left(t, t_{0}\right)\right]\right) \\
    &=\operatorname{det}\left[\Phi\left(t, t_{0}\right)\right] \lim _{\epsilon \rightarrow 0} \frac{1}{\epsilon}(\operatorname{det}[I+\epsilon A(t)]-1) \\
    &=\operatorname{trace}[A(t)] \operatorname{det}\left[\Phi\left(t, t_{0}\right)\right]
    \end{aligned}\nonumber\]

    This is a scalar linear differential equation for \(\operatorname{det}\left[\Phi\left(t, t_{0}\right)\right]\); integrating it from \(t_{0}\) to \(t\), with the initial condition \(\operatorname{det}\left[\Phi\left(t_{0}, t_{0}\right)\right]=\operatorname{det}(I)=1\), yields the desired result, (11.12).
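
    A short numerical sanity check of (11.12) can be built from the same column-by-column integration used earlier; the \(A(t)\) below is again just an illustrative choice:

        # Check Jacobi-Liouville: det(Phi(t, t0)) vs. exp(integral of trace A).
        import numpy as np
        from scipy.integrate import solve_ivp, quad

        def A(t):
            return np.array([[np.sin(t), 1.0],
                             [-1.0, -0.2]])

        t0, tf = 0.0, 2.0
        cols = []
        for x0 in np.eye(2):
            sol = solve_ivp(lambda s, x: A(s) @ x, (t0, tf), x0,
                            rtol=1e-10, atol=1e-12)
            cols.append(sol.y[:, -1])
        Phi = np.column_stack(cols)

        lhs = np.linalg.det(Phi)
        rhs = np.exp(quad(lambda s: np.trace(A(s)), t0, tf)[0])
        print(lhs, rhs)  # the two numbers should agree to solver tolerance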

