
15.9: Orthonormal Basis Expansions


    Main Idea

    When working with signals it is often helpful to break a signal up into smaller, more manageable parts. Hopefully by now you have been exposed to the concept of eigenvectors (Section 14.2) and their use in decomposing a signal in terms of one of its possible bases. By doing this we are able to simplify our calculations of signals and systems through eigenfunctions of LTI systems (Section 14.5).

    Now we would like to look at an alternative way to represent signals, through the use of an orthonormal basis. We can think of an orthonormal basis as a set of building blocks we use to construct functions. We will build up the signal/vector as a weighted sum of basis elements.

    Example \(\PageIndex{1}\)

    The complex sinusoids \(\frac{1}{\sqrt{T}} e^{j \omega_{0} n t}\), where \(\omega_{0}=\frac{2 \pi}{T}\), for all \(-\infty<n<\infty\) form an orthonormal basis for \(L^{2}([0, T])\).

    In our Fourier series equation, \(f(t)=\sum_{n=-\infty}^{\infty} c_{n} e^{j \omega_{0} n t}\), the \(\left\{c_{n}\right\}\) are just another representation of \(f(t)\).
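    A quick numerical check of this orthonormality is sketched below. It is an illustrative sketch (not part of the original text): we assume \(T=1\) and approximate the \(L^{2}([0, T])\) inner product with a Riemann sum.

        import numpy as np

        # Assumed setup for illustration: T = 1; the inner product
        # <f, g> = integral of f(t) * conj(g(t)) dt is approximated by a Riemann sum.
        T = 1.0
        w0 = 2 * np.pi / T                         # fundamental frequency on [0, T]
        t = np.linspace(0.0, T, 10000, endpoint=False)
        dt = t[1] - t[0]

        def b(n):
            """The n-th basis element (1/sqrt(T)) * exp(1j*w0*n*t), sampled on [0, T)."""
            return np.exp(1j * w0 * n * t) / np.sqrt(T)

        def inner(f, g):
            return np.sum(f * np.conj(g)) * dt

        print(abs(inner(b(3), b(3))))              # ~1: each element has unit norm
        print(abs(inner(b(3), b(5))))              # ~0: distinct elements are orthogonal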

    Note

    For signals/vectors in a Hilbert Space, the expansion coefficients are easy to find.

    Alternate Representation

    Recall our definition of a basis: A set of vectors \(\left\{b_{i}\right\}\) in a vector space \(S\) is a basis if

    1. The \(b_i\) are linearly independent.
    2. The \(b_i\) span \(S\). That is, we can find \(\left\{\alpha_{i}\right\}\), where \(\alpha_{i} \in \mathbb{C}\) (scalars) such that

      \[\boldsymbol{x}=\sum_{i} \alpha_{i} b_{i}, \quad \boldsymbol{x} \in S \nonumber \]

      where \(\boldsymbol{x}\) is a vector in \(S\), each \(\alpha_{i}\) is a scalar in \(\mathbb{C}\), and each \(b_{i}\) is a vector in \(S\).

    Condition 2 in the above definition says we can decompose any vector in terms of the \(\left\{b_{i}\right\}\). Condition 1 ensures that the decomposition is unique (think about this at home).

    Note

    The \(\left\{\alpha_{i}\right\}\) provide an alternate representation of \(\boldsymbol{x}\).

    Example \(\PageIndex{2}\)

    Let us look at a simple example in \(\mathbb{R}^2\), where we have the following vector:

    \[\boldsymbol{x}=\left(\begin{array}{l}
    1 \\
    2
    \end{array}\right) \nonumber \]

    Standard Basis: \(\left\{e_{0}, e_{1}\right\}=\left\{(1,0)^{T},(0,1)^{T}\right\}\)

    \[\boldsymbol{x}=e_{0}+2 e_{1} \nonumber \]

    Alternate Basis: \(\left\{h_{0}, h_{1}\right\}=\left\{(1,1)^{T},(1,-1)^{T}\right\}\)

    \[\boldsymbol{x}=\frac{3}{2} h_{0}+\frac{-1}{2} h_{1} \nonumber \]
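    Both expansions are easy to verify numerically; the following minimal sketch (added here for illustration) checks that each weighted sum reproduces \(\boldsymbol{x}\).

        import numpy as np

        x = np.array([1.0, 2.0])

        # Standard basis expansion: x = e0 + 2*e1
        e0, e1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
        print(e0 + 2 * e1)                         # [1. 2.] -> equals x

        # Alternate basis expansion: x = (3/2)*h0 - (1/2)*h1
        h0, h1 = np.array([1.0, 1.0]), np.array([1.0, -1.0])
        print(1.5 * h0 - 0.5 * h1)                 # [1. 2.] -> also equals x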

    In general, given a basis \(\left\{b_{0}, b_{1}\right\}\) and a vector \(\boldsymbol{x} \in \mathbb{R}^2\), how do we find the \(\alpha_0\) and \(\alpha_1\) such that

    \[\boldsymbol{x}=\alpha_{0} b_{0}+\alpha_{1} b_{1} \label{15.48} \]

    Finding the Coefficients

    Now let us address the question posed above about finding the \(\alpha_i\)'s in general for \(\mathbb{R}^2\). We start by rewriting Equation \ref{15.48} so that we can stack our \(b_i\)'s as columns in a \(2 \times 2\) matrix.

    \[\boldsymbol{x}=\alpha_{0}\left(\begin{array}{c}
    \vdots \\
    b_{0} \\
    \vdots
    \end{array}\right)+\alpha_{1}\left(\begin{array}{c}
    \vdots \\
    b_{1} \\
    \vdots
    \end{array}\right) \label{15.49} \]

    \[\boldsymbol{x}=\left(\begin{array}{cc}
    \vdots & \vdots \\
    b_{0} & b_{1} \\
    \vdots & \vdots
    \end{array}\right)\left(\begin{array}{l}
    \alpha_{0} \\
    \alpha_{1}
    \end{array}\right) \label{15.50} \]

    Example \(\PageIndex{3}\)

    Here is a simple example, which shows a little more detail about the above equations.

    \[\begin{align}
    \left(\begin{array}{c}
    x[0] \\
    x[1]
    \end{array}\right) &=\alpha_{0}\left(\begin{array}{c}
    b_{0}[0] \\
    b_{0}[1]
    \end{array}\right)+\alpha_{1}\left(\begin{array}{c}
    b_{1}[0] \\
    b_{1}[1]
    \end{array}\right) \nonumber \\
    &=\left(\begin{array}{c}
    \alpha_{0} b_{0}[0]+\alpha_{1} b_{1}[0] \\
    \alpha_{0} b_{0}[1]+\alpha_{1} b_{1}[1]
    \end{array}\right)
    \end{align} \nonumber \]

    \[\left(\begin{array}{l}
    x[0] \\
    x[1]
    \end{array}\right)=\left(\begin{array}{ll}
    b_{0}[0] & b_{1}[0] \\
    b_{0}[1] & b_{1}[1]
    \end{array}\right)\left(\begin{array}{l}
    \alpha_{0} \\
    \alpha_{1}
    \end{array}\right) \nonumber \]

    Simplifying our Equation

    To make notation simpler, we define the following two items from the above equations:

    • Basis Matrix:

      \[B=\left(\begin{array}{cc}
      \vdots & \vdots \\
      b_{0} & b_{1} \\
      \vdots & \vdots
      \end{array}\right) \nonumber \]

    • Coefficient Vector:

      \[\boldsymbol{\alpha}=\left(\begin{array}{l}
      \alpha_{0} \\
      \alpha_{1}
      \end{array}\right) \nonumber \]

    This gives us the following, concise equation:

    \[\boldsymbol{x}=B \boldsymbol{\alpha} \label{15.53} \]

    which is equivalent to \(\boldsymbol{x}=\sum_{i=0}^{1} \alpha_{i} b_{i}\).
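    As a concrete check (an illustrative sketch, reusing the alternate basis and coefficients from Example \(\PageIndex{2}\)), stacking the basis vectors as the columns of \(B\) and multiplying by \(\boldsymbol{\alpha}\) recovers \(\boldsymbol{x}\):

        import numpy as np

        # Columns of B are the basis vectors h0 and h1 from Example 2
        h0, h1 = np.array([1.0, 1.0]), np.array([1.0, -1.0])
        B = np.column_stack([h0, h1])              # [[1., 1.], [1., -1.]]

        alpha = np.array([1.5, -0.5])              # coefficients found earlier
        print(B @ alpha)                           # [1. 2.] -> recovers x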

    Example \(\PageIndex{4}\)

    Given the standard basis, \(\left\{\left(\begin{array}{l}
    1 \\
    0
    \end{array}\right),\left(\begin{array}{l}
    0 \\
    1
    \end{array}\right)\right\}\), we have the following basis matrix:

    \[B=\left(\begin{array}{ll}
    1 & 0 \\
    0 & 1
    \end{array}\right)\nonumber \]

    To get the \(\alpha_i\)'s, we solve for the coefficient vector in Equation \ref{15.53}:

    \[\boldsymbol{\alpha}=B^{-1} \boldsymbol{x} \label{15.54} \]

    where \(B^{-1}\) is the inverse matrix of \(B\).
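    In numerical practice it is usually better to solve the linear system \(B \boldsymbol{\alpha}=\boldsymbol{x}\) directly than to form \(B^{-1}\) explicitly; a minimal sketch (continuing the alternate-basis example from above):

        import numpy as np

        B = np.array([[1.0, 1.0],
                      [1.0, -1.0]])                # columns are the basis vectors
        x = np.array([1.0, 2.0])

        # np.linalg.solve is faster and more numerically stable than
        # computing np.linalg.inv(B) and multiplying.
        alpha = np.linalg.solve(B, x)
        print(alpha)                               # [ 1.5 -0.5]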

    Examples

    Example \(\PageIndex{5}\)

    Let us look at the standard basis first and try to calculate \(\boldsymbol{\alpha}\) from it.

    \[B=\left(\begin{array}{ll}
    1 & 0 \\
    0 & 1
    \end{array}\right)=I \nonumber \]

    where \(I\) is the identity matrix. In order to solve for \(\boldsymbol{\alpha}\), let us find the inverse of \(B\) first (which is obviously very trivial in this case):

    \[B^{-1}=\left(\begin{array}{ll}
    1 & 0 \\
    0 & 1
    \end{array}\right) \nonumber \]

    Therefore we get,

    \[\boldsymbol{\alpha}=B^{-1} \boldsymbol{x}=\boldsymbol{x} \nonumber \]

    Example \(\PageIndex{6}\)

    Let us look at an ever-so-slightly more complicated basis of \(\left\{\left(\begin{array}{l}
    1 \\
    1
    \end{array}\right),\left(\begin{array}{c}
    1 \\
    -1
    \end{array}\right)\right\}=\left\{h_{0}, h_{1}\right\}\). Then our basis matrix and inverse basis matrix become:

    \[\begin{aligned}
    B &=\left(\begin{array}{cc}
    1 & 1 \\
    1 & -1
    \end{array}\right) \\
    B^{-1} &=\left(\begin{array}{cc}
    \frac{1}{2} & \frac{1}{2} \\
    \frac{1}{2} & \frac{-1}{2}
    \end{array}\right)
    \end{aligned} \nonumber \]

    and for this example it is given that

    \[\boldsymbol{x}=\left(\begin{array}{l}
    3 \\
    2
    \end{array}\right) \nonumber \]

    Now we solve for \(\boldsymbol{\alpha}\)

    \[\boldsymbol{\alpha}=B^{-1} \boldsymbol{x}=\left(\begin{array}{cc}
    \frac{1}{2} & \frac{1}{2} \\
    \frac{1}{2} & \frac{-1}{2}
    \end{array}\right)\left(\begin{array}{l}
    3 \\
    2
    \end{array}\right)=\left(\begin{array}{l}
    2.5 \\
    0.5
    \end{array}\right) \nonumber \]

    and we get

    \[\boldsymbol{x}=2.5 h_{0}+0.5 h_{1} \nonumber \]

    Exercise \(\PageIndex{1}\)

    Now we are given the following basis matrix and \(\boldsymbol{x}\):

    \[\begin{array}{c}
    \left\{b_{0}, b_{1}\right\}=\left\{\left(\begin{array}{l}
    1 \\
    2
    \end{array}\right),\left(\begin{array}{l}
    3 \\
    0
    \end{array}\right)\right\} \\
    \boldsymbol{x}=\left(\begin{array}{l}
    3 \\
    2
    \end{array}\right)
    \end{array} \nonumber \]

    For this problem, make a sketch of the bases and then represent \(\boldsymbol{x}\) in terms of \(b_0\) and \(b_1\).

    Answer

    In order to represent \(\boldsymbol{x}\) in terms of \(b_0\) and \(b_1\) we will follow the same steps we used in the above example.

    \[\begin{array}{c}
    B=\left(\begin{array}{cc}
    1 & 3 \\
    2 & 0
    \end{array}\right) \\
    B^{-1}=\left(\begin{array}{cc}
    0 & \frac{1}{2} \\
    \frac{1}{3} & \frac{-1}{6}
    \end{array}\right) \\
    \boldsymbol{\alpha}=B^{-1} \boldsymbol{x}=\left(\begin{array}{c}
    1 \\
    \frac{2}{3}
    \end{array}\right)
    \end{array} \nonumber \]

    And now we can write \(\boldsymbol{x}\) in terms of \(b_0\) and \(b_1\).
    \[\boldsymbol{x}=b_{0}+\frac{2}{3} b_{1} \nonumber \]
    And we can easily substitute in our known values of \(b_0\) and \(b_1\) to verify our results.
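    A quick numerical check of this answer (an illustrative sketch, not part of the original exercise):

        import numpy as np

        b0, b1 = np.array([1.0, 2.0]), np.array([3.0, 0.0])
        B = np.column_stack([b0, b1])              # columns are b0 and b1
        x = np.array([3.0, 2.0])

        alpha = np.linalg.solve(B, x)
        print(alpha)                               # [1. 0.6667] = (1, 2/3)
        print(alpha[0] * b0 + alpha[1] * b1)       # [3. 2.] -> recovers x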

    Note

    A change of basis simply looks at \(\boldsymbol{x}\) from a "different perspective." \(B^{-1}\) transforms \(\boldsymbol{x}\) from the standard basis to our new basis, \(\left\{b_{0}, b_{1}\right\}\). Notice that this is a totally mechanical procedure.

    Extending the Dimension and Space

    We can also extend all of these ideas beyond \(\mathbb{R}^2\) and look at them in \(\mathbb{R}^n\) and \(\mathbb{C}^n\). This procedure extends naturally to higher (\(>2\)) dimensions. Given a basis \(\left\{b_{0}, b_{1}, \ldots, b_{n-1}\right\}\) for \(\mathbb{R}^n\), we want to find \(\left\{\alpha_{0}, \alpha_{1}, \ldots, \alpha_{n-1}\right\}\) such that

    \[\boldsymbol{x}=\alpha_{0} b_{0}+\alpha_{1} b_{1}+\ldots+\alpha_{n-1} b_{n-1} \label{15.55} \]

    Again, we will set up a basis matrix

    \[B=\left(\begin{array}{lllll}
    b_{0} & b_{1} & b_{2} & \dots & b_{n-1}
    \end{array}\right) \nonumber \]

    where the columns are the basis vectors. \(B\) will always be an \(n \times n\) matrix (although the above matrix does not appear square since we left the terms in vector notation). We can then proceed to rewrite Equation \ref{15.53}:

    \[\boldsymbol{x}=\left(\begin{array}{cccc}
    b_{0} & b_{1} & \ldots & b_{n-1}
    \end{array}\right)\left(\begin{array}{c}
    \alpha_{0} \\
    \vdots \\
    \alpha_{n-1}
    \end{array}\right)=B \boldsymbol{\alpha} \nonumber \]

    and

    \[\boldsymbol{\alpha}=B^{-1} \boldsymbol{x} \nonumber \]
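    The mechanics are identical in code for any \(n\); the sketch below (added for illustration, with an assumed randomly chosen basis for \(\mathbb{R}^{4}\)) finds the coefficients and reconstructs \(\boldsymbol{x}\).

        import numpy as np

        rng = np.random.default_rng(0)

        n = 4
        # Assumed example basis: random Gaussian columns are linearly
        # independent with probability 1, so B is (almost surely) invertible.
        B = rng.standard_normal((n, n))
        x = rng.standard_normal(n)

        alpha = np.linalg.solve(B, x)              # expansion coefficients of x
        print(np.allclose(B @ alpha, x))           # True: x = sum_i alpha_i * b_i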


    This page titled 15.9: Orthonormal Basis Expansions is shared under a CC BY license and was authored, remixed, and/or curated by Richard Baraniuk et al.