
1.6: Exercises


    Exercise 1.1 Partitioned Matrices

    Suppose

    \[A=\left(\begin{array}{ll}
    A_{1} & A_{2} \\
    0 & A_{4}
    \end{array}\right)\nonumber\]

    with \(A_{1}\) and \(A_{4}\) square.

    (a) Write the determinant \(\det A\) in terms of \(\det A_{1}\) and \(\det A_{4}\). (Hint: Write \(A\) as the product

    \[\left(\begin{array}{cc}
    I & 0 \\
    0 & A_{4}
    \end{array}\right)\left(\begin{array}{cc}
    A_{1} & A_{2} \\
    0 & I
    \end{array}\right)\nonumber\]

    and use the fact that the determinant of the product of two square matrices is the product of the individual determinants - the individual determinants are easy to evaluate in this case.)
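
    A quick numerical sanity check of the identity in (a) can be done in Python with numpy; the block sizes (2 and 3) and the random test matrices below are arbitrary illustrative choices, not part of the problem.

```python
# Check det(A) = det(A1) * det(A4) for a block upper-triangular A.
# Block sizes and random entries are arbitrary test choices.
import numpy as np

rng = np.random.default_rng(0)
A1 = rng.standard_normal((2, 2))
A2 = rng.standard_normal((2, 3))
A4 = rng.standard_normal((3, 3))

# Assemble A = [[A1, A2], [0, A4]].
A = np.block([[A1, A2], [np.zeros((3, 2)), A4]])

print(np.linalg.det(A))                       # determinant of the full matrix
print(np.linalg.det(A1) * np.linalg.det(A4))  # agrees up to roundoff
```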

    (b) Assume for this part that \(A_{1}\) and \(A_{4}\) are nonsingular (i.e., square and invertible). Now find \(A^{-1}\). (Hint: Write \(AB = I\) and partition \(B\) and \(I\) commensurably with the partitioning of \(A\).)
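
    For part (b), numerically inverting a randomly generated block upper-triangular matrix suggests the structure your closed-form answer should have. The sketch below (with arbitrary \(2 \times 2\) blocks) is a check against which to compare your derivation, not a substitute for it.

```python
# Invert a random block upper-triangular A and inspect the block structure
# of the inverse; the 2 x 2 block sizes are an arbitrary choice.
import numpy as np

rng = np.random.default_rng(1)
A1 = rng.standard_normal((2, 2))
A2 = rng.standard_normal((2, 2))
A4 = rng.standard_normal((2, 2))
A = np.block([[A1, A2], [np.zeros((2, 2)), A4]])

Ainv = np.linalg.inv(A)
print(np.round(Ainv, 4))  # note the (numerically) zero lower-left block

# The diagonal blocks of the inverse match inv(A1) and inv(A4); compare the
# remaining off-diagonal block against your closed-form answer.
print(np.round(Ainv[:2, :2] - np.linalg.inv(A1), 12))
print(np.round(Ainv[2:, 2:] - np.linalg.inv(A4), 12))
```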

    Exercise 1.2 Partitioned Matrices

    Suppose

    \[A=\left(\begin{array}{ll}
    A_{1} & A_{2} \\
    A_{3} & A_{4}
    \end{array}\right)\nonumber\]

    where the \(A_{i}\) are matrices of conformable dimension.

    (a) What can A be premultiplied by to get the matrix

    \[\left(\begin{array}{ll}
    A_{3} & A_{4} \\
    A_{1} & A_{2}
    \end{array}\right) ?\nonumber\]

    (b) Assume that \(A_{1}\) is nonsingular. What can A be premultiplied by to get the matrix

    \[\left(\begin{array}{ll}
    A_{1} & A_{2} \\
    0 & C
    \end{array}\right)\nonumber\]

    where \(C=A_{4}-A_{3} A_{1}^{-1} A_{2}\) ?

    (c) Suppose A is a square matrix. Use the result in (b) - and the fact mentioned in the hint to Exercise 1.1(a) - to obtain an expression for det(A) in terms of determinants involving only the submatrices \(A_{1}\), \(A_{2}\), \(A_{3}\), \(A_{4}\).
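
    Whatever expression you derive in (c) can be tested numerically. The sketch below checks it in the form \(\det A = \det A_{1} \det C\), with \(C\) the Schur complement from part (b); the block sizes and random matrices are arbitrary, with \(A_{1}\) kept square (and almost surely invertible) as the problem assumes.

```python
# Test det(A) = det(A1) * det(C), with C = A4 - A3 inv(A1) A2 from part (b).
import numpy as np

rng = np.random.default_rng(2)
A1 = rng.standard_normal((3, 3))
A2 = rng.standard_normal((3, 2))
A3 = rng.standard_normal((2, 3))
A4 = rng.standard_normal((2, 2))
A = np.block([[A1, A2], [A3, A4]])

C = A4 - A3 @ np.linalg.inv(A1) @ A2         # the Schur complement
print(np.linalg.det(A))
print(np.linalg.det(A1) * np.linalg.det(C))  # agrees up to roundoff
```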

    Exercise 1.3 Matrix identities

    Prove the following very useful matrix identities. In proving identities such as these, see if you can obtain proofs that make as few assumptions as possible beyond those implied by the problem statement. For example, in (a) and (b) below, neither A nor B need be square, and in (c) neither B nor D need be square - so avoid assuming that any of these matrices is (square and) invertible!

    (a) det(I - AB) = det(I - BA), if A is \(p \times q\) and B is \(q \times p\). (Hint: Evaluate the determinants of

    \[\left(\begin{array}{cc}
    I & A \\
    B & I
    \end{array}\right)\left(\begin{array}{cc}
    I & -A \\
    0 & I
    \end{array}\right), \quad\left(\begin{array}{cc}
    I & -A \\
    0 & I
    \end{array}\right)\left(\begin{array}{cc}
    I & A \\
    B & I
    \end{array}\right)\nonumber\]

    to obtain the desired result.) One common situation in which the above result is useful is when \(p > q\); why is this so?
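
    A quick numerical check of the identity, with dimensions chosen (arbitrarily) so that \(p > q\), which also hints at the answer to the last question: the \(q \times q\) determinant is much cheaper than the \(p \times p\) one.

```python
# Check det(I - AB) = det(I - BA) with rectangular A (p x q) and B (q x p).
# Taking p = 6 > q = 2: a 2 x 2 determinant replaces a 6 x 6 one.
import numpy as np

rng = np.random.default_rng(3)
p, q = 6, 2
A = rng.standard_normal((p, q))
B = rng.standard_normal((q, p))

print(np.linalg.det(np.eye(p) - A @ B))  # p x p determinant
print(np.linalg.det(np.eye(q) - B @ A))  # q x q determinant, same value
```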

    (b) Show that \((I-AB)^{-1} A = A(I-BA)^{-1}\).
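
    Again a numerical spot check, with arbitrary rectangular test matrices; random draws make \(I - AB\) almost surely invertible.

```python
# Check the push-through identity (I - AB)^{-1} A = A (I - BA)^{-1}.
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((3, 4))

lhs = np.linalg.inv(np.eye(4) - A @ B) @ A
rhs = A @ np.linalg.inv(np.eye(3) - B @ A)
print(np.allclose(lhs, rhs))  # True
```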

    (c) Show that \((A+B C D)^{-1}=A^{-1}-A^{-1} B\left(C^{-1}+D A^{-1} B\right)^{-1} D A^{-1}\). (Hint: Multiply the right side by \(A + BCD\) and cleverly gather terms.) This is perhaps the most used of matrix identities, and is known by various names - the matrix inversion lemma, the ABCD lemma (!), Woodbury's formula, etc. It is rediscovered from time to time in different guises. Its noteworthy feature is that, if \(A^{-1}\) is known, then the inverse of a modification of A is expressed as a modification of \(A^{-1}\) that may be simple to compute, e.g. when C is of small dimensions. Show, for instance, that evaluation of \((I-ab^{T})^{-1}\), where \(a\) and \(b\) are column vectors, only requires inversion of a scalar quantity.
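
    The sketch below verifies the lemma numerically and then checks the rank-one case; the closed form used for the latter is the standard Sherman-Morrison special case, so treat it as a way to validate your answer rather than a derivation. Dimensions and the shifts that keep \(A\) and \(C\) invertible are arbitrary test choices.

```python
# Check the matrix inversion lemma, then the rank-one special case, where the
# only "inversion" needed is of the scalar 1 - b^T a.
import numpy as np

rng = np.random.default_rng(5)
n, k = 5, 2
inv = np.linalg.inv

# Shifts by 5*I keep A and C comfortably invertible for this random test.
A = rng.standard_normal((n, n)) + 5 * np.eye(n)
B = rng.standard_normal((n, k))
C = rng.standard_normal((k, k)) + 5 * np.eye(k)
D = rng.standard_normal((k, n))

lhs = inv(A + B @ C @ D)
rhs = inv(A) - inv(A) @ B @ inv(inv(C) + D @ inv(A) @ B) @ D @ inv(A)
print(np.allclose(lhs, rhs))  # True

# Rank-one case: (I - a b^T)^{-1} = I + a b^T / (1 - b^T a), if b^T a != 1.
a = rng.standard_normal((n, 1))
b = rng.standard_normal((n, 1))
lhs1 = inv(np.eye(n) - a @ b.T)
rhs1 = np.eye(n) + (a @ b.T) / (1.0 - (b.T @ a).item())
print(np.allclose(lhs1, rhs1))  # True
```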

    Exercise 1.4 Range and Rank

    This is a practice problem in linear algebra (except that you have perhaps only seen such results stated for the case of real matrices and vectors, rather than complex ones - the extensions are routine).

    Assume that \(A \in \mathbb{C}^{m \times n}\) (i.e., A is a complex \(m \times n\) matrix) and \(B \in \mathbb{C}^{n \times p}\). We shall use the symbols \(\mathcal{R}(A)\) and \(\mathcal{N}(A)\) to denote, respectively, the range space and null space (or kernel) of the matrix A. Following the Matlab convention, we use the symbol \(A^{\prime}\) to denote the transpose of the complex conjugate of the matrix \(A\); \(\mathcal{R}^{\perp}(A)\) denotes the subspace orthogonal to the subspace \(\mathcal{R}(A)\), i.e. the set of vectors \(x\) such that \(x^{\prime}y=0,\ \forall y \in \mathcal{R}(A)\), etc.

    (a) Show that \(\mathcal{R}^{\perp}(A)= \mathcal{N}(A^{\prime})\) and \(\mathcal{N}^{\perp}(A)= \mathcal{R}(A^{\prime})\).

    (b) Show that

    \[\operatorname{rank}(A)+\operatorname{rank}(B)-n \leq \operatorname{rank}(A B) \leq \min \{\operatorname{rank}(A), \operatorname{rank}(B)\}\nonumber\]

    This result is referred to as Sylvester's inequality.
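
    A numerical illustration of the two bounds, on random factors whose ranks are controlled by construction; all dimensions and ranks here are arbitrary choices.

```python
# Illustrate Sylvester's inequality on random low-rank factors.
import numpy as np

rng = np.random.default_rng(6)
m, n, p = 5, 4, 6
A = rng.standard_normal((m, 2)) @ rng.standard_normal((2, n))  # rank 2 (a.s.)
B = rng.standard_normal((n, 3)) @ rng.standard_normal((3, p))  # rank 3 (a.s.)

rA = np.linalg.matrix_rank(A)
rB = np.linalg.matrix_rank(B)
rAB = np.linalg.matrix_rank(A @ B)
print(rA + rB - n, "<=", rAB, "<=", min(rA, rB))  # here: 1 <= 2 <= 2
```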

    Exercise 1.5 Vandermonde Matrix

    A matrix with the following structure is referred to as a Vandermonde matrix:

    \[\left(\begin{array}{ccccc}
    1 & \lambda_{1} & \lambda_{1}^{2} & \cdots & \lambda_{1}^{n-1} \\
    1 & \lambda_{2} & \lambda_{2}^{2} & \cdots & \lambda_{2}^{n-1} \\
    \vdots & \vdots & \vdots & \cdots & \vdots \\
    1 & \lambda_{n} & \lambda_{n}^{2} & \cdots & \lambda_{n}^{n-1}
    \end{array}\right)\nonumber\]

    This matrix is clearly singular if the \({\lambda}_{i}\) are not all distinct. Show the converse, namely that if all n of the \({\lambda}_{i}\) are distinct, then the matrix is nonsingular. One way to do this - although not the easiest! - is to show by induction that the determinant of the Vandermonde matrix is

    \[\prod_{1 \leq i < j \leq n}\left(\lambda_{j}-\lambda_{i}\right)\nonumber\]

    Look for an easier argument first.
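
    The determinant formula is easy to test numerically; the sample \(\lambda_{i}\) below are arbitrary but distinct.

```python
# Compare det of a Vandermonde matrix against the product formula.
import numpy as np

lam = np.array([0.5, -1.0, 2.0, 3.5])   # arbitrary distinct values
V = np.vander(lam, increasing=True)     # rows are (1, lam_i, lam_i^2, ...)
n = len(lam)

prod = 1.0
for i in range(n):
    for j in range(i + 1, n):
        prod *= lam[j] - lam[i]

print(np.linalg.det(V), prod)  # agree up to roundoff
```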

    Exercise 1.6 Matrix Derivatives

    (a) Suppose A(t) and B(t) are matrices whose entries are differentiable functions of t, and assume the product A(t)B(t) is well-defined. Show that

    \[\frac{d}{d t}(A(t) B(t))=\frac{d A(t)}{d t} B(t)+A(t) \frac{d B(t)}{d t}\nonumber\]

    where the derivative of a matrix is, by definition, the matrix of derivatives - i.e., to obtain the derivative of a matrix, simply replace each entry of the matrix by its derivative. (Note: The ordering of the matrices in the above result is important!).
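
    As a quick empirical check of the product rule (and of the importance of the ordering), one can compare a central-difference approximation against the formula; the particular \(A(t)\) and \(B(t)\) below are arbitrary smooth test matrices, not part of the problem.

```python
# Central-difference check of the matrix product rule at a sample point t0.
import numpy as np

def A(t):
    return np.array([[np.sin(t), t**2], [1.0, np.exp(t)]])

def dA(t):  # entrywise derivative of A
    return np.array([[np.cos(t), 2 * t], [0.0, np.exp(t)]])

def B(t):
    return np.array([[t, np.cos(t)], [t**3, 2.0]])

def dB(t):  # entrywise derivative of B
    return np.array([[1.0, -np.sin(t)], [3 * t**2, 0.0]])

t0, h = 0.7, 1e-6
fd = (A(t0 + h) @ B(t0 + h) - A(t0 - h) @ B(t0 - h)) / (2 * h)
rule = dA(t0) @ B(t0) + A(t0) @ dB(t0)
print(np.max(np.abs(fd - rule)))  # small: O(h^2) finite-difference error
```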

    (b) Use the result of (a) to evaluate the derivative of the inverse of a matrix \(A(t)\), i.e. evaluate the derivative of \(A^{-1}(t)\).
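
    The harness below computes a central-difference approximation to the derivative of \(A^{-1}(t)\) at a sample point, against which you can compare whatever closed-form expression you derive; \(A(t)\) is an arbitrary smooth, invertible test matrix.

```python
# Central-difference approximation to d/dt [A^{-1}(t)] at a sample point t0.
import numpy as np

def A(t):
    return np.array([[2.0 + np.sin(t), t], [t**2, 3.0 + np.cos(t)]])

t0, h = 0.4, 1e-6
fd = (np.linalg.inv(A(t0 + h)) - np.linalg.inv(A(t0 - h))) / (2 * h)
print(np.round(fd, 6))  # evaluate your expression at t0 and compare
```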

    Exercise 1.7

    Suppose T is a linear transformation from X to itself. Verify that any two matrix representations, A and B, of T are related by a nonsingular transformation; i.e., \(A = R^{-1} B R\) for some nonsingular R. Show that as R varies over all nonsingular matrices, we get all possible representations.
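
    The similarity relation can be illustrated (though of course not proved) numerically: similar matrices represent the same transformation and share a spectrum. The dimension and the random change of basis R below are arbitrary.

```python
# Similar matrices share eigenvalues: a numerical illustration, not a proof.
import numpy as np

rng = np.random.default_rng(7)
B = rng.standard_normal((4, 4))   # one matrix representation of T
R = rng.standard_normal((4, 4))   # random R is almost surely nonsingular
A = np.linalg.inv(R) @ B @ R      # representation after the change of basis

print(np.sort_complex(np.linalg.eigvals(A)))
print(np.sort_complex(np.linalg.eigvals(B)))  # same spectrum, up to roundoff
```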

    Exercise 1.8

    Let X be the vector space of polynomials of degree less than or equal to M.

    (a) Show that the set \(B=\left\{1, x, \ldots, x^{M}\right\}\) is a basis for this vector space.

    (b) Consider the mapping T from X to X defined as:

    \[f(x)=(T g)(x)=\frac{d}{d x} g(x)\nonumber\]

    1. Show that T is linear.
    2. Derive a matrix representation for T in terms of the basis B (a numerical check harness is sketched after this list).
    3. What are the eigenvalues of T?
    4. Compute one eigenvector associated with one of the eigenvalues.
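
    A harness for checking your answer to part 2: enter your candidate matrix for T in the basis B as Tmat below (a hypothetical placeholder, left zero here), and compare its action on random coefficient vectors with numpy's polynomial differentiation.

```python
# Compare a candidate (M+1) x (M+1) matrix for d/dx in the basis
# {1, x, ..., x^M} against numpy's polynomial derivative.
import numpy as np
from numpy.polynomial import polynomial as P

M = 4
Tmat = np.zeros((M + 1, M + 1))  # <-- replace with your matrix representation

rng = np.random.default_rng(8)
c = rng.standard_normal(M + 1)     # coefficients of g in the basis B
dc = np.append(P.polyder(c), 0.0)  # coefficients of dg/dx, padded to M+1

print(np.allclose(Tmat @ c, dc))   # True once Tmat is correct
```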

    This page titled 1.6: Exercises is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Mohammed Dahleh, Munther A. Dahleh, and George Verghese (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.