
26.3: General \(n \times n\) Systems


Now let us consider a general \(n \times n\) linear system. We will again use a systematic, two-step approach: Gaussian elimination followed by back substitution: \[\text{STEP 1:}\quad \underset{n \times n}{A}\, \underset{n \times 1}{u}=\underset{n \times 1}{f} \;\rightarrow\; \underset{n \times n}{U}\, \underset{n \times 1}{u}=\underset{n \times 1}{\hat{f}}\] \[\text{STEP 2:}\quad U u=\hat{f} \;\Rightarrow\; u\]

This time, we will pay particular attention to the operation count required for each step. In addition, we will use the graphical representation shown in Figure \(26.4\) to facilitate the discussion. In the figure, blue represents (in general) a nonzero entry, white represents a zero entry, the red square marks the pivot, the orange squares identify the working rows, the shaded regions represent the rows and columns of pivots already processed, and the unshaded regions represent the rows and columns of pivots not yet processed.

As before, the first step of Gaussian elimination identifies the first equation (eqn 1) as the pivot equation and eliminates the first coefficient (column 1) of eqn 2 through eqn \(n\). To each such row, we add an appropriately scaled copy of the pivot row, where the scaling is determined by the ratio of the first coefficient of the row to the pivot. We must scale (i.e. multiply) and add \(n\) coefficients, so the elimination of the first coefficient requires \(2n\) operations per row. Since there are \(n-1\) rows to work on, the total operation count for the elimination of column 1 of eqn 2 through eqn \(n\) is \(2n(n-1) \approx 2n^{2}\). Figure 26.4(b) illustrates the elimination process working on the fourth row. Figure 26.4(c) shows the partially processed matrix with zeros in the first column: the \(U\)-to-be after the first step, i.e. \(\tilde{U}(k=1)\).
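As a concrete illustration, the first elimination step can be sketched in a few lines of Python. This is a sketch only; the function name and the NumPy-based in-place updates are our own choices, not from the text:

```python
import numpy as np

def eliminate_first_column(A, f):
    """Eliminate column 1 of eqn 2 through eqn n by adding a scaled
    copy of the pivot row (eqn 1) to each working row; A and f are
    modified in place.  Returns the operation count for the matrix part."""
    n = A.shape[0]
    ops = 0
    for i in range(1, n):            # working rows: eqn 2 through eqn n
        s = -A[i, 0] / A[0, 0]       # ratio of first coefficient to the pivot
        A[i, :] += s * A[0, :]       # n scale-and-add pairs: 2n operations
        f[i] += s * f[0]             # keep the right-hand side consistent
        ops += 2 * n
    return ops
```

Each of the \(n-1\) working rows costs \(2n\) operations, so `ops` comes out to \(2n(n-1)\), matching the count above.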

In the second step, we identify the second equation as the pivot equation. The elimination of column 2 of eqn 3 through eqn \(n\) requires the addition to each working row of an \((n-1)\)-vector, an appropriately scaled version of the pivot row of \(\tilde{U}(k=1)\). Since there are \(n-2\) rows to work on, the total operation count for the elimination of column 2 of eqn 3 through eqn \(n\) is \(2(n-1)(n-2) \approx 2(n-1)^{2}\). Note that the work required for the elimination of the second coefficient in this second step is lower than the work required for the elimination of the first coefficient in the first step because 1) we do not alter the first row (i.e. there is one less row from which to eliminate the coefficient) and 2) the first coefficient of all working rows has already been set to zero. In other words, we are working on the lower \((n-1) \times (n-1)\) sub-block of the original matrix, eliminating the first coefficient of the sub-block. This sub-block interpretation of the elimination process is clear from Figures 26.4(c) and 26.4(d); because the first pivot has already been processed, we only need to work on the unshaded area of the matrix.

In general, on the \(k^{\text{th}}\) step of Gaussian elimination, we use the \(k^{\text{th}}\) row to remove the \(k^{\text{th}}\) coefficient of eqn \(k+1\) through eqn \(n\), working on the \((n-k+1) \times (n-k+1)\) sub-block. Each of the \(n-k\) working rows requires \(2(n-k+1)\) operations, so the operation count for the step is \(2(n-k+1)(n-k) \approx 2(n-k+1)^{2}\). Summing the work required for all the steps, the total operation count for Gaussian elimination is \[2 n^{2}+2(n-1)^{2}+\cdots+2 \cdot 3^{2}+2 \cdot 2^{2} \approx \sum_{k=1}^{n} 2 k^{2} \approx \frac{2}{3} n^{3} \text { FLOPs. }\] Note that the cost of Gaussian elimination grows quite rapidly with the size of the problem: as the third power of \(n\). The upper-triangular final matrix, \(U=\tilde{U}(k=n)\), is shown in Figure 26.4(f).
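The full forward-elimination loop, with a running operation count, might look like the following. This is a sketch assuming NumPy; the function and variable names are illustrative, not from the text:

```python
import numpy as np

def gaussian_eliminate(A, f):
    """Reduce A u = f to an upper-triangular system U u = f_hat.
    Returns (U, f_hat, ops), where ops counts the scale-and-add work
    on the matrix entries."""
    U = A.astype(float).copy()
    f_hat = f.astype(float).copy()
    n = U.shape[0]
    ops = 0
    for k in range(n - 1):                 # pivot rows, 0-based index k
        for i in range(k + 1, n):          # working rows below the pivot
            s = -U[i, k] / U[k, k]
            U[i, k:] += s * U[k, k:]       # 2(n - k) operations on this sub-row
            f_hat[i] += s * f_hat[k]       # build the modified right-hand side
            ops += 2 * (n - k)
    return U, f_hat, ops
```

Summing `ops` over all steps reproduces the \(\approx 2n^{3}/3\) growth: doubling \(n\) increases the elimination work roughly eightfold.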

During the Gaussian elimination process, we must also construct the modified right-hand side \(\hat{f}\). In eliminating the first coefficient, we modify the right-hand side of eqn 2 through eqn \(n\) (\(n-1\) equations), each requiring two operations for multiplication and addition, resulting in \(2(n-1) \approx 2n\) total operations. In general, the \(k^{\text{th}}\) step of Gaussian elimination requires modification of the \((n-k)\)-sub-vector on the right-hand side. Thus, the total operation count for the construction of the right-hand side is \[2 n+2(n-1)+\cdots+2 \cdot 3+2 \cdot 2 \approx \sum_{k=1}^{n} 2 k \approx n^{2} \text { FLOPs. }\] As the cost of constructing the modified right-hand side scales as \(n^{2}\), it becomes insignificant compared to the \(2 n^{3} / 3\) operations required for the matrix manipulation for large \(n\). Thus, we conclude that the total cost of Gaussian elimination, including the construction of the modified right-hand side, is \(2 n^{3} / 3\) FLOPs.
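The \(n^{2}\) scaling of the right-hand-side work can be checked directly by summing the per-step counts (a small illustrative helper, not from the text):

```python
def rhs_ops(n):
    """Operations spent on the right-hand side: the k-th step updates
    an (n - k)-sub-vector at two operations (multiply, add) per entry."""
    return sum(2 * (n - k) for k in range(1, n))

# rhs_ops(n) equals n(n - 1), i.e. ~ n^2, dwarfed by the ~ 2n^3/3 matrix work
```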

Now let us consider the operation count of back substitution. Recall that the \(n \times n\) upper-triangular system takes the form \[\left(\begin{array}{ccccc} U_{11} & U_{12} & \cdots & \cdots & U_{1 n} \\ & U_{22} & & & U_{2 n} \\ & & \ddots & & \vdots \\ & 0 & & U_{n-1\, n-1} & U_{n-1\, n} \\ & & & & U_{n n} \end{array}\right)\left(\begin{array}{c} u_{1} \\ u_{2} \\ \vdots \\ u_{n-1} \\ u_{n} \end{array}\right)=\left(\begin{array}{c} \hat{f}_{1} \\ \hat{f}_{2} \\ \vdots \\ \hat{f}_{n-1} \\ \hat{f}_{n} \end{array}\right) .\] We proceed to solve for the unknowns \(u_{1}, \ldots, u_{n}\) starting from the last unknown \(u_{n}\) using the \(n^{\text{th}}\) equation and sequentially solving for \(u_{n-1}, \ldots, u_{1}\) in that order. Schematically, the solution process takes the form \[\begin{array}{rl} \text{eqn } n: & U_{n n} u_{n}=\hat{f}_{n} \;\Rightarrow\; u_{n}=\frac{\hat{f}_{n}}{U_{n n}} \\ \text{eqn } n-1: & U_{n-1\, n-1} u_{n-1}+U_{n-1\, n} u_{n}=\hat{f}_{n-1} \\ & \quad\Downarrow \\ & U_{n-1\, n-1} u_{n-1}=\hat{f}_{n-1}-U_{n-1\, n} u_{n} \;\Rightarrow\; u_{n-1} \\ \vdots & \\ \text{eqn } 1: & U_{11} u_{1}+U_{12} u_{2}+\cdots+U_{1 n} u_{n}=\hat{f}_{1} \\ & \quad\Downarrow \\ & U_{11} u_{1}=\hat{f}_{1}-U_{12} u_{2}-\cdots-U_{1 n} u_{n} \;\Rightarrow\; u_{1} . \end{array}\] Solving for \(u_{n}\) requires one operation (the division). Solving for \(u_{n-1}\) requires one multiplication-subtraction pair (two operations) and one division. In general, solving for \(u_{k}\) requires \(n-k\) multiplication-subtraction pairs (\(2(n-k)\) operations) and one division. Summing all the operations, the total operation count for back substitution is \[1+(1+2)+(1+2 \cdot 2)+\cdots+(1+2(n-1)) \approx \sum_{k=1}^{n} 2 k \approx n^{2} \text { FLOPs. }\] Note that the cost of the back substitution step scales as the second power of the problem size \(n\); thus, the cost of back substitution becomes negligible compared to that of Gaussian elimination for large \(n\).
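The back substitution sweep above can be sketched as follows (again assuming NumPy; the names are illustrative):

```python
import numpy as np

def back_substitute(U, f_hat):
    """Solve the upper-triangular system U u = f_hat, starting from
    the last unknown and working upward.  Returns (u, ops)."""
    n = U.shape[0]
    u = np.zeros(n)
    ops = 0
    for k in range(n - 1, -1, -1):       # u_n first, then u_{n-1}, ..., u_1
        # (n - 1 - k) multiplication-subtraction pairs, then one division
        u[k] = (f_hat[k] - U[k, k + 1:] @ u[k + 1:]) / U[k, k]
        ops += 2 * (n - 1 - k) + 1
    return u, ops
```

For an \(n \times n\) system `ops` sums to exactly \(n^{2}\), in line with the estimate above.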


    26.3: General \(n \times n\) Systems is shared under a not declared license and was authored, remixed, and/or curated by LibreTexts.
