
26.4: Gaussian Elimination and LU Factorization


    In this chapter, we introduced a systematic procedure for solving a linear system using Gaussian elimination and back substitution. We interpreted Gaussian elimination as a process of triangulating the system matrix of interest; the process relied, in the \(k^{\text {th }}\) step, on adding appropriately scaled versions of the \(k^{\text {th }}\) equation to the equations below it in order to eliminate the leading coefficient. In particular, we also modified the right-hand side during the triangulation procedure, so that the same quantity was added to both sides of each equation and the solution was unaffected. The end product of our triangulation process is an upper triangular matrix \(U\) and a modified right-hand side \(\hat{f}\). If we are given a new right-hand side, we would have to repeat the entire procedure (at \(\mathcal{O}\left(n^{3}\right)\) cost) to deduce the corresponding new modified right-hand side.
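    For concreteness, the following is a minimal NumPy sketch of this elimination-and-back-substitution procedure for a dense system (no pivoting; the function and variable names are ours, not from the text):

```python
import numpy as np

def gaussian_eliminate(A, f):
    """Triangulate A u = f by Gaussian elimination, then back-substitute.

    At step k, scaled multiples of row k are subtracted from the rows
    below it to zero out column k; the same operations are applied to f,
    so the solution is unchanged.
    """
    U = A.astype(float).copy()
    fhat = f.astype(float).copy()
    n = len(fhat)
    for k in range(n - 1):                  # eliminate entries below pivot k
        for i in range(k + 1, n):
            m = U[i, k] / U[k, k]           # multiplier (assumes nonzero pivot)
            U[i, k:] -= m * U[k, k:]
            fhat[i] -= m * fhat[k]          # same operation on the RHS
    # back substitution on the upper triangular system U u = fhat
    u = np.zeros(n)
    for i in range(n - 1, -1, -1):
        u[i] = (fhat[i] - U[i, i + 1:] @ u[i + 1:]) / U[i, i]
    return u
```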

    It turns out that a slight modification of our Gaussian elimination procedure in fact permits solution of the problem with a different right-hand side in \(\mathcal{O}\left(n^{2}\right)\) operations. To achieve this, instead of modifying the right-hand side during the upper-triangulation process, we record the operations used in that process so that they can later be applied to any right-hand side. It turns out that this record can be kept in the form of a lower triangular matrix \(L\), such that the modified right-hand side \(\hat{f}\) is the solution to \[L \hat{f}=f,\] where \(f\) is the original right-hand side. Just as back substitution solves an upper triangular system, forward substitution solves this lower triangular system in \(\mathcal{O}\left(n^{2}\right)\) operations. This lower triangular matrix \(L\), which records all operations used in transforming the matrix \(A\) into \(U\), in fact satisfies \[A=L U .\] In other words, the matrices \(L\) and \(U\) arise from a factorization of the matrix \(A\) into lower and upper triangular matrices.
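    Given \(L\) and \(U\), each new right-hand side then costs only \(\mathcal{O}\left(n^{2}\right)\) operations: one forward substitution followed by one back substitution. A minimal sketch of this two-stage solve (continuing the NumPy setting above; the function name solve_lu is ours):

```python
def solve_lu(L, U, f):
    """Solve A u = f given the factorization A = L U, in O(n^2) operations."""
    n = len(f)
    fhat = np.zeros(n)
    for i in range(n):                       # forward substitution: L fhat = f
        fhat[i] = (f[i] - L[i, :i] @ fhat[:i]) / L[i, i]
    u = np.zeros(n)
    for i in range(n - 1, -1, -1):           # back substitution: U u = fhat
        u[i] = (fhat[i] - U[i, i + 1:] @ u[i + 1:]) / U[i, i]
    return u
```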

    This procedure is called \(L U\) factorization. (The fact that \(L\) and \(U\) must permit such a factorization is straightforward to see from the fact that \(U u=\hat{f}\) and \(L \hat{f}=f\); multiplication of both sides of \(U u=\hat{f}\) by \(L\) yields \(L U u=L \hat{f}=f\), and because the relationship must hold for any solution-right-hand-side pair \(\{u, f\}\) to \(A u=f\), it must be that \(L U=A\).) The factorization process is in fact identical to our Gaussian elimination and requires \(2 n^{3} / 3\) operations. Note that we in fact computed all the pieces of the matrix \(L\) in our elimination procedure; we simply did not assemble them into a matrix, for simplicity.
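    The elimination multipliers are precisely the subdiagonal entries of \(L\). A sketch of the factorization, obtained from the earlier elimination code simply by recording each multiplier instead of applying it to a right-hand side (again no pivoting; lu_factorize is our name):

```python
def lu_factorize(A):
    """Factor A = L U by Gaussian elimination, recording the multipliers.

    L is unit lower triangular; entry L[i, k] is exactly the multiplier
    used to eliminate U[i, k] at step k (no pivoting, ~2n^3/3 operations).
    """
    U = A.astype(float).copy()
    n = U.shape[0]
    L = np.eye(n)
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]     # record the operation in L
            U[i, k:] -= L[i, k] * U[k, k:]
    return L, U
```

    With this in hand, solving \(A u = f\) for any new \(f\) reduces to the \(\mathcal{O}\left(n^{2}\right)\) call solve_lu(L, U, f) above.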

    In general, the LU decomposition will exist if the matrix \(A\) is non-singular. There is, however, one twist: we may need to permute the rows of \(A\) - a process known as (partial) pivoting - in order to avoid a zero pivot, which would prematurely terminate the process. (In fact, permutations of rows can also be advantageous for avoiding small pivots, which can lead to amplification of round-off errors.) If, even in infinite precision, we arrive at an exactly zero pivot despite row permutations, then this in fact demonstrates that \(A\) is singular.
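    In practice, one typically calls a library routine that performs partial pivoting automatically. For example, with SciPy (a small illustration; the matrix below is our own, chosen so that its \((1,1)\) entry is zero, which would halt the no-pivoting sketches above even though the matrix is non-singular):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[0., 2., 1.],     # zero pivot in the (1,1) position:
              [1., 1., 1.],     # elimination without pivoting fails here,
              [2., 1., 3.]])    # but A itself is non-singular
f = np.array([1., 2., 3.])

lu, piv = lu_factor(A)          # LU with partial (row) pivoting: P A = L U
u = lu_solve((lu, piv), f)      # O(n^2) substitution for each new f
```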

    There are some matrices for which no pivoting is required. One important example in mechanical engineering is the class of symmetric positive definite (SPD) matrices. For an SPD matrix (which is certainly non-singular - all of its eigenvalues are positive) we will never arrive at a zero pivot, nor will we need to permute rows to ensure stability. Note, however, that we may still wish to permute rows to improve the efficiency of the LU decomposition for sparse systems - which is the topic of the next section.
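    As a quick numerical check of this claim, one can run the no-pivoting lu_factorize sketch from above on an SPD matrix (the construction \(A = B^{\mathrm{T}} B + 5 I\), which is SPD for any real \(B\), is our own choice for illustration):

```python
rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = B.T @ B + 5 * np.eye(5)     # symmetric positive definite by construction

L, U = lu_factorize(A)          # the no-pivoting sketch succeeds...
assert np.allclose(L @ U, A)
assert np.all(np.diag(U) > 0)   # ...and every pivot is strictly positive
```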

