11.2: The LTI Case


    For linear time-invariant systems in continuous time, it is possible to give an explicit formula for the state transition matrix, \(\Phi(t, \tau)\). In this case \(A(t) = A\), a constant matrix. Let us define the matrix exponential of \(A\) by an infinite series of the same form that is (or may be) used to define the scalar exponential:

    \[\begin{aligned}
    e^{\left(t-t_{0}\right) A} &=I+\left(t-t_{0}\right) A+\frac{1}{2 !}\left(t-t_{0}\right)^{2} A^{2}+\ldots \\
    &=\sum_{k=0}^{\infty} \frac{1}{k !}\left(t-t_{0}\right)^{k} A^{k}
    \end{aligned} \ \tag{11.13}\]

    It turns out that this series is as nicely behaved as in the scalar case: it converges absolutely for all \(A \in \mathbb{R}^{n \times n}\) and for all \(t \in \mathbb{R}\), and it can be differentiated or integrated term by term. There exist methods for computing it, although the task is fraught with numerical difficulties.
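
    As a quick illustration, the Python sketch below sums a truncated version of the series (11.13) and compares the result against SciPy's expm routine; the particular matrix \(A\), the time \(t\), and the number of terms retained are arbitrary illustrative choices, not anything prescribed by the text.

    ```python
    # Truncated series (11.13) for e^{(t - t0) A}, compared with scipy.linalg.expm.
    # The matrix A, the time t, and the number of terms are illustrative choices.
    import numpy as np
    from scipy.linalg import expm

    def expm_series(A, t, t0=0.0, terms=30):
        """Partial sum of (11.13): sum over k of (t - t0)^k A^k / k!"""
        n = A.shape[0]
        result = np.zeros((n, n))
        term = np.eye(n)                              # k = 0 term
        for k in range(terms):
            result += term
            term = term @ ((t - t0) * A) / (k + 1)    # next term of the series
        return result

    A = np.array([[0.0, 1.0], [-2.0, -3.0]])
    t = 1.5
    print(np.allclose(expm_series(A, t), expm(t * A)))   # True
    ```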

    With the above definition, it is easy to verify that the matrix exponential satisfies the defining conditions (11.2) and (11.3) for the state transition matrix. The solution of (11.1) in the LTI case is therefore given by

    \[x(t)=e^{\left(t-t_{0}\right) A} x\left(t_{0}\right)+\int_{t_{0}}^{t} e^{A(t-\tau)} B u(\tau) d \tau \ \tag{11.14}\]

    Once \(x(t)\) has been determined, the system output can be obtained from

    \[y(t)=C x(t)+D u(t) \ \tag{11.15}\]
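
    For concreteness, the sketch below evaluates (11.14) and (11.15) numerically for an illustrative second-order system driven by a unit-step input: the zero-input term is propagated with the matrix exponential, and the convolution integral is approximated by trapezoidal quadrature. The system matrices, initial condition, and input are assumptions made purely for illustration.

    ```python
    # Numerical evaluation of (11.14)-(11.15) for an illustrative system with a
    # unit-step input: zero-input response via expm, zero-state response via
    # trapezoidal quadrature of the convolution integral.
    import numpy as np
    from scipy.linalg import expm
    from scipy.integrate import trapezoid

    A = np.array([[0.0, 1.0], [-2.0, -3.0]])
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])
    D = np.array([[0.0]])

    t0, t = 0.0, 2.0
    x0 = np.array([[1.0], [0.0]])
    u = lambda tau: np.array([[1.0]])            # unit-step input for tau >= t0

    x_zero_input = expm((t - t0) * A) @ x0       # e^{(t - t0) A} x(t0)

    taus = np.linspace(t0, t, 2001)
    integrand = np.stack([expm(A * (t - tau)) @ B @ u(tau) for tau in taus])
    x_zero_state = trapezoid(integrand, taus, axis=0)

    x_t = x_zero_input + x_zero_state            # state solution (11.14)
    y_t = C @ x_t + D @ u(t)                     # output equation (11.15)
    print(x_t.ravel(), y_t.ravel())
    ```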

    Transform-Domain Solution of LTI Models

    We can now parallel our transform-domain treatment of the DT case, except that now we use the one-sided Laplace transform instead of the \(\mathcal{Z}\)-transform:

    Definition 11.1: One-sided Laplace Transform

    The one-sided Laplace transform, \(F (s)\), of the signal \(f (t)\) is given by

    \[F(s)=\int_{t=0-}^{\infty} e^{-s t} f(t) d t \nonumber\]

    for all \(s\) for which the integral converges; this set of values of \(s\) is called the region of convergence (R.O.C.).

    The various properties of the Laplace transform follow from this definition. The shift property of \(\mathcal{Z}\)-transforms that we used in the DT case is replaced by the following differentiation property: Suppose that \(f(t) \stackrel{\mathcal{L}}{\longleftrightarrow} F(s)\). Then

    \[g(t)=\frac{d f(t)}{d t} \Longrightarrow G(s)=s F(s)-f(0-) \nonumber\]
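
    As a small symbolic check of this property, the sketch below works out both sides for the illustrative choice \(f(t) = e^{-2t}\) (for which \(f(0-) = 1\)); the example function is an assumption made only to exercise the formula.

    ```python
    # Symbolic check of the differentiation property for the illustrative choice
    # f(t) = exp(-2 t): the transform of df/dt should equal s F(s) - f(0-).
    import sympy as sp

    t = sp.symbols('t', positive=True)
    s = sp.symbols('s')
    f = sp.exp(-2 * t)

    F = sp.laplace_transform(f, t, s, noconds=True)                 # 1/(s + 2)
    G = sp.laplace_transform(sp.diff(f, t), t, s, noconds=True)     # -2/(s + 2)

    print(sp.simplify(G - (s * F - f.subs(t, 0))))                  # 0
    ```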

    Now, given the state-space model (11.1) in the LTI case, we can take transforms on both sides of the equations there. Using the transform property just described, we get

    \[\begin{aligned}
    s X(s)-x(0-) &=A X(s)+B U(s) \ (11.16) \\
    Y(s) &=C X(s)+D U(s) \ (11.17)
    \end{aligned}\nonumber\]

    This is solved to yield

    \[\begin{aligned}
    X(s) &=(s I-A)^{-1} x(0-)+(s I-A)^{-1} B U(s) \\
    Y(s) &=C(s I-A)^{-1} x(0-)+\underbrace{\left[C(s I-A)^{-1} B+D\right]}_{\text {Transfer Function }} U(s) \ (11.18)
    \end{aligned}\nonumber\]

    which is very similar to the DT case.
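
    The transfer function identified in (11.18) can be formed symbolically from the state-space matrices. The sketch below does so for an illustrative single-input, single-output system; the matrices are arbitrary choices, not taken from the text.

    ```python
    # Symbolic construction of the resolvent (sI - A)^{-1} and the transfer
    # function C (sI - A)^{-1} B + D for an illustrative SISO system.
    import sympy as sp

    s = sp.symbols('s')
    A = sp.Matrix([[0, 1], [-2, -3]])
    B = sp.Matrix([[0], [1]])
    C = sp.Matrix([[1, 0]])
    D = sp.Matrix([[0]])

    resolvent = (s * sp.eye(2) - A).inv()        # (sI - A)^{-1}
    H = sp.simplify(C * resolvent * B + D)       # transfer function
    print(H)                                     # Matrix([[1/(s**2 + 3*s + 2)]])
    ```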

    An important fact that emerges on comparing (11.18) with its time-domain version (11.14) is that

    \[\mathcal{L}\left(e^{A t}\right)=(s I-A)^{-1} \nonumber\]

    Therefore one way to compute the state transition matrix (a good way for small examples!) is by evaluating the entry-by-entry inverse transform of \((sI - A)^{-1}\).
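
    The sketch below carries out this entry-by-entry inverse transform with sympy for an illustrative (diagonalizable) matrix and checks the result against sympy's own matrix exponential; Example 11.2 then works through a non-diagonalizable case by hand.

    ```python
    # Entry-by-entry inverse Laplace transform of (sI - A)^{-1}, checked against
    # sympy's matrix exponential. The matrix A is an illustrative choice.
    import sympy as sp

    t = sp.symbols('t', positive=True)
    s = sp.symbols('s')
    A = sp.Matrix([[0, 1], [-2, -3]])

    resolvent = (s * sp.eye(2) - A).inv()
    Phi = resolvent.applyfunc(lambda entry: sp.inverse_laplace_transform(entry, s, t))

    print(sp.simplify(Phi - (A * t).exp()))      # zero matrix
    ```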

    Example 11.2

    Find the state transition matrix associated with the (non-diagonalizable!) matrix

    \[A=\left[\begin{array}{ll}
    1 & 2 \\
    0 & 1
    \end{array}\right] \nonumber\]

    Using the above formula,

    \[\begin{aligned}
    \mathcal{L}\left(e^{A t}\right) &=(s I-A)^{-1}=\left[\begin{array}{cc}
    s-1 & -2 \\
    0 & s-1
    \end{array}\right]^{-1} \\
    &=\left[\begin{array}{cc}
    \frac{1}{s-1} & \frac{2}{(s-1)^{2}} \\
    0 & \frac{1}{s-1}
    \end{array}\right]
    \end{aligned} \nonumber\]

    By taking the inverse Laplace transform of the above matrix we get

    \[e^{A t}=\left[\begin{array}{cc}
    e^{t} & 2 t e^{t} \\
    0 & e^{t}
    \end{array}\right] \nonumber\]
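
    A quick numerical cross-check of this closed form (at an arbitrarily chosen sample time) can be made with SciPy's matrix exponential:

    ```python
    # Numerical check of the closed form from Example 11.2 at t = 0.7.
    import numpy as np
    from scipy.linalg import expm

    A = np.array([[1.0, 2.0], [0.0, 1.0]])
    t = 0.7
    closed_form = np.array([[np.exp(t), 2 * t * np.exp(t)],
                            [0.0,       np.exp(t)]])
    print(np.allclose(expm(t * A), closed_form))   # True
    ```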


    This page titled 11.2: The LTI Case is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Mohammed Dahleh, Munther A. Dahleh, and George Verghese (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.