
16.4: Further Concepts in Linear Algebra


    Begin Advanced Material

    Column Space and Null Space

    Let us introduce a few more concepts in linear algebra. The first is the column space. The column space of a matrix \(A\) is the space of all vectors that can be expressed as \(A x\). From the column interpretation of the matrix-vector product, we recall that \(A x\) is a linear combination of the columns of \(A\) with the weights provided by \(x\). We denote the column space of \(A \in \mathbb{R}^{m \times n}\) by \(\operatorname{col}(A)\), and the space is defined as \[\operatorname{col}(A)=\left\{v \in \mathbb{R}^{m}: v=A x \text { for some } x \in \mathbb{R}^{n}\right\} .\] The column space of \(A\) is also called the image of \(A\), \(\operatorname{img}(A)\), or the range of \(A\), \(\operatorname{range}(A)\).

    The second concept is the null space. The null space of \(A \in \mathbb{R}^{m \times n}\) is denoted by \(\operatorname{null}(A)\) and is defined as \[\operatorname{null}(A)=\left\{x \in \mathbb{R}^{n}: A x=0\right\},\] i.e., the null space of \(A\) is the space of vectors \(x\) for which \(A x=0\). Recalling the column interpretation of the matrix-vector product and the definition of linear independence, we note that the columns of \(A\) must be linearly dependent in order for \(A\) to have a non-trivial null space. The null space defined above is more formally known as the right null space and is also called the kernel of \(A\), \(\operatorname{ker}(A)\).
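
    As a concrete illustration of both definitions, the following numpy sketch (a hypothetical example, not part of the original text; the matrix, weights, and tolerance are chosen purely for demonstration) checks that \(A x\) is a linear combination of the columns of \(A\) and recovers a basis for \(\operatorname{null}(A)\) from the singular value decomposition.

```python
import numpy as np

# Hypothetical example matrix: its third column equals the sum of the first two,
# so the columns are linearly dependent and a non-trivial null space exists.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [2.0, 3.0, 5.0]])
x = np.array([2.0, -1.0, 4.0])

# Column interpretation: A @ x is the linear combination of the columns of A
# with weights x_1, x_2, x_3.
assert np.allclose(A @ x, x[0] * A[:, 0] + x[1] * A[:, 1] + x[2] * A[:, 2])

# dim col(A) is the rank of A; a basis for null(A) is given by the right
# singular vectors associated with (numerically) zero singular values.
U, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
null_basis = Vt[rank:].T                                        # columns span null(A)
print("dim col(A) =", rank)                                     # 2
print("A @ null_basis ~ 0:", np.allclose(A @ null_basis, 0))    # True
```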

    Example 16.4.1 column space and null space

    Let us consider a \(3 \times 2\) matrix \[A=\left(\begin{array}{ll} 0 & 2 \\ 1 & 0 \\ 0 & 0 \end{array}\right) \text {. }\] The column space of \(A\) is the set of vectors representable as \(A x\), which are \[A x=\left(\begin{array}{ll} 0 & 2 \\ 1 & 0 \\ 0 & 0 \end{array}\right)\left(\begin{array}{l} x_{1} \\ x_{2} \end{array}\right)=\left(\begin{array}{l} 0 \\ 1 \\ 0 \end{array}\right) \cdot x_{1}+\left(\begin{array}{l} 2 \\ 0 \\ 0 \end{array}\right) \cdot x_{2}=\left(\begin{array}{c} 2 x_{2} \\ x_{1} \\ 0 \end{array}\right) .\] So, the column space of \(A\) is a set of vectors with arbitrary values in the first two entries and zero in the third entry. That is, \(\operatorname{col}(A)\) is the 1-2 plane in \(\mathbb{R}^{3}\).

    Because the columns of \(A\) are linearly independent, the only way to realize \(A x=0\) is if \(x\) is the zero vector. Thus, the null space of \(A\) consists of the zero vector only.
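
    These claims can be verified numerically. The short numpy check below (an illustrative sketch, not part of the original text) confirms that \(A\) has full column rank, so its null space is trivial, and that every product \(A x\) has a zero third entry, i.e., lies in the 1-2 plane.

```python
import numpy as np

A = np.array([[0.0, 2.0],
              [1.0, 0.0],
              [0.0, 0.0]])

print(np.linalg.matrix_rank(A))               # 2: full column rank, so null(A) = {0}
print(np.linalg.svd(A, compute_uv=False))     # [2. 1.]: no zero singular values

# Any product A @ x has a zero third entry, i.e., col(A) is the 1-2 plane in R^3.
x = np.random.default_rng(0).standard_normal(2)
print((A @ x)[2])                             # 0.0
```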

    Let us now consider a \(2 \times 3\) matrix \[B=\left(\begin{array}{ccc} 1 & 2 & 0 \\ 2 & -1 & 3 \end{array}\right) .\] The column space of \(B\) consists of vectors of the form \[B x=\left(\begin{array}{ccc} 1 & 2 & 0 \\ 2 & -1 & 3 \end{array}\right)\left(\begin{array}{l} x_{1} \\ x_{2} \\ x_{3} \end{array}\right)=\left(\begin{array}{c} x_{1}+2 x_{2} \\ 2 x_{1}-x_{2}+3 x_{3} \end{array}\right) .\] By judiciously choosing \(x_{1}, x_{2}\), and \(x_{3}\), we can express any vector in \(\mathbb{R}^{2}\). Thus, the column space of \(B\) is the entire \(\mathbb{R}^{2}\), i.e., \(\operatorname{col}(B)=\mathbb{R}^{2}\).

    Because the columns of \(B\) are not linearly independent, we expect \(B\) to have a nontrivial null space. Invoking the row interpretation of the matrix-vector product, a vector \(x\) in the null space must satisfy \[x_{1}+2 x_{2}=0 \quad \text { and } \quad 2 x_{1}-x_{2}+3 x_{3}=0 .\] The first equation requires \(x_{1}=-2 x_{2}\). The combination of the first requirement and the second equation yields \(x_{3}=\frac{5}{3} x_{2}\). Thus, the null space of \(B\) is \[\operatorname{null}(B)=\left\{\alpha \cdot\left(\begin{array}{c} -2 \\ 1 \\ 5 / 3 \end{array}\right): \alpha \in \mathbb{R}\right\} .\] That is, the null space is a one-dimensional space (i.e., a line) in \(\mathbb{R}^{3}\).
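
    The corresponding check for \(B\) (again an illustrative numpy sketch) confirms that \(\operatorname{col}(B)=\mathbb{R}^{2}\) via the rank, and that the null-space direction derived above indeed satisfies \(B x=0\).

```python
import numpy as np

B = np.array([[1.0, 2.0, 0.0],
              [2.0, -1.0, 3.0]])

print(np.linalg.matrix_rank(B))          # 2, so col(B) = R^2

# The null-space direction derived above satisfies B x = 0.
x_null = np.array([-2.0, 1.0, 5.0 / 3.0])
print(np.allclose(B @ x_null, 0))        # True
```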

    Projectors

    Another important concept - in particular for least squares covered in Chapter 17 - is the concept of a projector. A projector is a square matrix \(P\) that is idempotent, i.e. \[P^{2}=P P=P .\] Let \(v\) be an arbitrary vector in \(\mathbb{R}^{m}\). The projector \(P\) projects \(v\), which is not necessarily in \(\operatorname{col}(P)\), onto \(\operatorname{col}(P)\), i.e. \[w=P v \in \operatorname{col}(P), \quad \forall v \in \mathbb{R}^{m} .\] In addition, the projector \(P\) does not modify a vector that is already in \(\operatorname{col}(P)\). This is easily verified because \[P w=P P v=P v=w, \quad \forall w \in \operatorname{col}(P) .\] Intuitively, a projector projects a vector \(v \in \mathbb{R}^{m}\) onto a smaller space \(\operatorname{col}(P)\). If the vector is already in \(\operatorname{col}(P)\), then it is left unchanged.
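
    As a small numerical illustration (a hypothetical numpy sketch with an arbitrarily chosen idempotent matrix, not a construction from the original text), the code below verifies that \(P^{2}=P\) and that vectors already in \(\operatorname{col}(P)\) are left unchanged.

```python
import numpy as np

# A hypothetical (oblique) projector: idempotent but not symmetric.
P = np.array([[1.0, 1.0],
              [0.0, 0.0]])
assert np.allclose(P @ P, P)     # P^2 = P

v = np.array([3.0, 2.0])
w = P @ v                        # w = P v lies in col(P) = span{(1, 0)}
assert np.allclose(P @ w, w)     # a vector already in col(P) is left unchanged
print(w)                         # [5. 0.]
```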

    The complementary projector of \(P\) is the projector \(I-P\). It is easy to verify that \(I-P\) is a projector itself because \[(I-P)^{2}=(I-P)(I-P)=I-2 P+P P=I-P .\] It can be shown that the complementary projector \(I-P\) projects onto the null space of \(P\), \(\operatorname{null}(P)\).
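
    Continuing the same hypothetical sketch, we can check that \(I-P\) is idempotent and that it maps an arbitrary vector into \(\operatorname{null}(P)\).

```python
import numpy as np

P = np.array([[1.0, 1.0],
              [0.0, 0.0]])
P_c = np.eye(2) - P                    # complementary projector I - P

assert np.allclose(P_c @ P_c, P_c)     # (I - P)^2 = I - P
v = np.array([3.0, 2.0])
assert np.allclose(P @ (P_c @ v), 0)   # (I - P) v lies in null(P)
```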

    When the space along which the projector projects is orthogonal to the space onto which the projector projects, the projector is said to be an orthogonal projector. Algebraically, orthogonal projectors are symmetric.

    When an orthonormal basis for a space is available, it is particularly simple to construct an orthogonal projector onto the space. Say \(\left\{q_{1}, \ldots, q_{n}\right\}\) is an orthonormal basis for an \(n\)-dimensional subspace of \(\mathbb{R}^{m}, n<m\). Given any \(v \in \mathbb{R}^{m}\), we recall that \[u_{i}=q_{i}^{\mathrm{T}} v\] is the component of \(v\) in the direction of \(q_{i}\) represented in the basis \(\left\{q_{i}\right\}\). We then introduce the vector \[w_{i}=q_{i}\left(q_{i}^{\mathrm{T}} v\right) ;\] the sum of such vectors produces the projection of \(v \in \mathbb{R}^{m}\) onto the subspace spanned by \(\left\{q_{i}\right\}\). More compactly, if we form an \(m \times n\) matrix \[Q=\left(\begin{array}{lll} q_{1} & \cdots & q_{n} \end{array}\right),\] then the projection of \(v\) onto the column space of \(Q\) is \[w=Q\left(Q^{\mathrm{T}} v\right)=\left(Q Q^{\mathrm{T}}\right) v .\] We recognize that the orthogonal projector onto the span of \(\left\{q_{i}\right\}\), or \(\operatorname{col}(Q)\), is \[P=Q Q^{\mathrm{T}} .\] Of course \(P\) is symmetric, \(\left(Q Q^{\mathrm{T}}\right)^{\mathrm{T}}=\left(Q^{\mathrm{T}}\right)^{\mathrm{T}} Q^{\mathrm{T}}=Q Q^{\mathrm{T}}\), and idempotent, \(\left(Q Q^{\mathrm{T}}\right)\left(Q Q^{\mathrm{T}}\right)=Q\left(Q^{\mathrm{T}} Q\right) Q^{\mathrm{T}}=Q Q^{\mathrm{T}} .\)
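
    A minimal numpy sketch of this construction is given below. It assumes a reduced QR factorization merely as a convenient way to obtain an orthonormal basis (the dimensions and random data are illustrative), builds \(P=Q Q^{\mathrm{T}}\), and verifies its defining properties.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 2

# Obtain an orthonormal basis {q_1, ..., q_n} of a random n-dimensional
# subspace of R^m via a reduced QR factorization (illustrative choice).
Q, _ = np.linalg.qr(rng.standard_normal((m, n)))   # Q is m x n with Q^T Q = I

P = Q @ Q.T                           # orthogonal projector onto col(Q)
assert np.allclose(P, P.T)            # symmetric
assert np.allclose(P @ P, P)          # idempotent

v = rng.standard_normal(m)
w = P @ v                             # projection of v onto col(Q)
assert np.allclose(P @ w, w)          # vectors already in col(Q) are unchanged
assert np.allclose(Q.T @ (v - w), 0)  # the residual v - w is orthogonal to col(Q)
```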

    End Advanced Material


    This page titled 16.4: Further Concepts in Linear Algebra is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Masayuki Yano, James Douglass Penn, George Konidaris, & Anthony T Patera (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.