
3.4: Exercises


    Exercise 3.1

Least Squares Error Solution. We begin with a mini-tutorial on orthogonal and unitary matrices. An orthogonal matrix may be defined as a square real matrix whose columns are of unit length and mutually orthogonal - i.e., its columns form an orthonormal set. It follows quite easily (as you should try to verify for yourself; a numerical check follows the list) that:

    • the inverse of an orthogonal matrix is just its transpose;
    • the rows of an orthogonal matrix form an orthonormal set as well;
    • the usual Euclidean inner product of two real vectors \(v\) and \(w\), namely the scalar \(v^{\prime}w\), equals the inner product of \(Uv\) and \(Uw\), if \(U\) is an orthogonal matrix - and therefore the length of \(v\), namely \(\sqrt{v^{\prime} v}\), equals that of \(Uv\).
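These claims are easy to check numerically in Matlab. A minimal sketch, with randomly generated data that is purely illustrative:

% Numerical check of the three properties above (random data, illustrative only)
[U, ~] = qr(randn(4));                        % qr of a random matrix yields an orthogonal U
norm(U' * U - eye(4))                         % ~0: the inverse of U is its transpose
v = randn(4, 1); w = randn(4, 1);
abs(v' * w - (U * v)' * (U * w))              % ~0: inner products are preserved
abs(sqrt(v' * v) - sqrt((U * v)' * (U * v)))  % ~0: hence lengths are preserved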

    A unitary matrix is similarly defined, except that its entries are allowed to be complex - so its inverse is the complex conjugate of its transpose. A fact about orthogonal matrices that turns out to be important in several numerical algorithms is the following: Given a real \(m \times n\) matrix \(A\) of full column rank, it is possible (in many ways) to find an orthogonal matrix \(U\) such that

    \[U A=\left(\begin{array}{c}
    R \\
    0
    \end{array}\right)\nonumber\]

    where \(R\) is a nonsingular, upper-triangular matrix. (If \(A\) is complex, then we can find a unitary matrix \(U\) that leads to the same equation.) To see how to compute \(U\) in Matlab, read the comments obtained by typing help qr; the matrix \(Q\) that is referred to in the comments is just \(U^{\prime}\).
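For instance, the following minimal sketch (arbitrary sizes; the randn data is purely illustrative) obtains \(U\) from qr and confirms the structure above:

% Sketch: obtaining U from Matlab's qr for a full-column-rank A
m = 6; n = 3;
A = randn(m, n);
[Q, R0] = qr(A);            % full factorization: A = Q*R0, Q an m x m orthogonal matrix
U = Q';                     % the U of the text
UA = U * A;                 % should equal [R; 0], with R = R0(1:n, 1:n) upper triangular
norm(UA(n+1:end, :))        % ~0: the bottom block vanishes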

We now turn to the problem of interest. Given a real \(m \times n\) matrix \(A\) of full column rank, and a real \(m\)-vector \(y\), we wish to approximately satisfy the equation \(y = Ax\). Specifically, let us choose the vector \(x\) to minimize \(\|y-A x\|^{2}=(y-A x)^{\prime}(y-A x)\), the squared Euclidean length of the "error" \(y - Ax\). By invoking the above results on orthogonal matrices, show that (in the notation introduced earlier) the minimizing \(x\) is

    \[\hat{x}=R^{-1} y_{1}\nonumber\]

    where \(y_{1}\) denotes the vector formed from the first \(n\) components of \(Uy\). (In practice, we would not bother to find \(R^{-1}\) explicitly. Instead, taking advantage of the upper-triangular structure of \(R\), we would solve the system of equations \(R \hat{x} = y_{1}\) by back substitution, starting from the last equation.)
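Continuing the sketch above, the least squares solution can be computed this way and compared against Matlab's backslash (the comparison is a sanity check, not part of the algorithm):

% Continuing the previous sketch: solve R*xhat = y1 by back substitution
y  = randn(m, 1);
Uy = U * y;
y1 = Uy(1:n);               % first n components of U*y
R  = R0(1:n, 1:n);          % nonsingular upper-triangular block
xhat = R \ y1;              % backslash on a triangular system uses back substitution
norm(xhat - A \ y)          % ~0: agrees with Matlab's built-in least squares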

The above way of solving a least-squares problem (proposed by Householder in 1958, but sometimes referred to as Golub's algorithm) is numerically preferable in most cases to solving the "normal equations" in the form \(\hat{x}=\left(A^{\prime} A\right)^{-1} A^{\prime} y\), and is essentially what Matlab does when you write \(\hat{x}=A \backslash y\). An (oversimplified!) explanation of the trouble with the normal equation solution is that it implicitly evaluates the product \(\left(R^{\prime} R\right)^{-1} R^{\prime}\), whereas the Householder/Golub method recognizes that this product simply equals \(R^{-1}\), and thereby avoids unnecessary and error-prone steps.
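The loss of accuracy can be seen on a contrived, nearly rank-deficient example (the data below is a standard textbook-style illustration, not from this text): forming \(A^{\prime}A\) squares the condition number, so information that the QR route preserves is rounded away in double precision.

% Contrived illustration: cond(A) ~ 1e8, so cond(A'*A) ~ 1e16
epsv = 1e-8;
A = [1 1; epsv 0; 0 epsv];
y = [2; epsv; epsv];            % the exact solution is x = [1; 1]
x_normal = (A' * A) \ (A' * y); % normal equations; Matlab may warn of near singularity
x_qr = A \ y;                   % QR-based solve recovers [1; 1] to high accuracy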

    Exercise 3.2

    Suppose the input sequence \(\left\{u_{j}\right\}\) and the output sequence \(\left\{y_{j}\right\}\) of a particular system are related by

    \[y_{k}=\sum_{i=1}^{n} h_{i} u_{k-i}\nonumber\]

    where all quantities are scalar.
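In particular, writing out the \(k = n\) case exhibits \(y_{n}\) as an inner product of the impulse response coefficients with the input vector, so the parts below are finite-dimensional minimum-norm problems:

\[y_{n}=\sum_{i=1}^{n} h_{i} u_{n-i}=\left(\begin{array}{cccc}
h_{n} & h_{n-1} & \cdots & h_{1}
\end{array}\right)\left(\begin{array}{c}
u_{0} \\
u_{1} \\
\vdots \\
u_{n-1}
\end{array}\right)\nonumber\]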

(i) Assume we want to have \(y_{n}\) equal to some specified number \(\bar{y}\). Determine \(u_{0}, \ldots, u_{n-1}\) so as to achieve this while minimizing \(u_{0}^{2}+\ldots+u_{n-1}^{2}\).

(ii) Suppose now that we are willing to relax our objective of exactly attaining \(y_{n}=\bar{y}\). This leads us to the following modified problem. Determine \(u_{0}, \ldots, u_{n-1}\) so as to minimize

    \[r\left(\bar{y}-y_{n}\right)^{2}+u_{0}^{2}+\ldots+u_{n-1}^{2}\nonumber\]

where \(r\) is a positive weighting parameter.

    (a) Solve the modified problem.

    (b) What do you expect the answer to be in the limiting cases of \(r = 0\) and \(r = \infty\)? Show that your answer in (a) indeed gives you these expected limiting results.
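Once you have derived closed-form answers, a numerical sanity check is easy to set up. The sketch below uses assumed values for \(n\), the \(h_{i}\), \(\bar{y}\) and \(r\) (none of them from the text), and checks part (ii) by recasting the cost as an ordinary least-squares problem:

% Hypothetical data; the closed forms themselves are left for you to derive
n = 4; h = [0.5; -1.2; 0.8; 0.3];  % assumed impulse response coefficients
ybar = 2.0; r = 10;                 % assumed target and weighting parameter
g = flipud(h);                      % y_n = g' * [u_0; ...; u_{n-1}]
u_i = pinv(g') * ybar;              % part (i): pinv returns the minimum-norm solution
M = [sqrt(r) * g'; eye(n)];         % stacked so ||b - M*u||^2 equals the cost in (ii)
b = [sqrt(r) * ybar; zeros(n, 1)];
u_ii = M \ b;                       % part (ii): as r grows, u_ii approaches u_i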

    Exercise 3.3

Return to the problem considered in Example 3.4. Suppose that, in addition to requiring \(p(T) = y\) for a specified \(y\), we also want \(\dot{p}(T) = 0\). In other words, we want to bring the mass to rest at the position \(y\) at time \(T\). Of all the force functions \(x(t)\) that can accomplish this, determine the one that minimizes \(\langle x(t), x(t)\rangle=\int_{0}^{T} x^{2}(t)\, dt\).
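As a general pointer (a standard fact about minimum-norm problems in inner product spaces, not a statement of what Example 3.4 contains): if both constraints can be written as inner products \(\langle g_{1}, x\rangle = y\) and \(\langle g_{2}, x\rangle = 0\) for known functions \(g_{1}, g_{2}\), then the minimizing force lies in their span, with coefficients determined by the Gram matrix:

\[\hat{x}(t)=\alpha_{1} g_{1}(t)+\alpha_{2} g_{2}(t), \quad\left(\begin{array}{cc}
\left\langle g_{1}, g_{1}\right\rangle & \left\langle g_{1}, g_{2}\right\rangle \\
\left\langle g_{2}, g_{1}\right\rangle & \left\langle g_{2}, g_{2}\right\rangle
\end{array}\right)\left(\begin{array}{c}
\alpha_{1} \\
\alpha_{2}
\end{array}\right)=\left(\begin{array}{c}
y \\
0
\end{array}\right)\nonumber\]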

    Exercise 3.4

    (a) Given \(y=A^{\prime} x\), with \(A^{\prime}\) of full row rank, find the solution vector \(x\) for which \(x^{\prime}Wx\) is minimum, where \(W = L^{\prime}L\) and \(L\) is nonsingular (i.e. where \(W\) is Hermitian and positive definite).

    (b) A specified current \(I_{0}\) is to be sent through the fixed voltage source \(V_{0}\) in the figure. Find what values \(v_{1}\), \(v_{2}\), \(v_{3}\) and \(v_{4}\) must take so that the total power dissipation in the resistors is minimized.
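For part (a), one way to check a candidate answer numerically is the change of variables \(z = Lx\), which turns the weighted problem into an ordinary minimum-norm problem in \(z\). A sketch with made-up sizes and random data (all names and values here are hypothetical):

% Hypothetical data for 3.4(a): A' is p x q with full row rank, W = L'*L
q = 5; p = 2;
A = randn(q, p); y = randn(p, 1);
L = triu(randn(q)) + 5 * eye(q);   % nonsingular L, so W is positive definite
M = A' / L;                        % with z = L*x, the constraint y = A'*x reads y = M*z
z = pinv(M) * y;                   % minimizing ||z|| is the same as minimizing x'*W*x
x = L \ z;                         % candidate minimizer; norm(A'*x - y) should be ~0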

