27.2: Matrix-Vector Multiplications

To introduce the concept of sparse operations, let us first consider multiplication of an \(n \times n\) sparse matrix with a (dense) \(n\)-vector. Recall that matrix-vector multiplication may be interpreted row-wise or column-wise. In the row-wise interpretation, we consider the task of computing \(w=A v\) as performing \(n\) inner products, one for each entry of \(w\), i.e. \[w_{i}=\left(\begin{array}{llll} A_{i 1} & A_{i 2} & \ldots & A_{i n} \end{array}\right)\left(\begin{array}{c} v_{1} \\ v_{2} \\ \vdots \\ v_{n} \end{array}\right), \quad i=1, \ldots, n .\] If the matrix \(A\) is dense, the \(n\) inner products of \(n\)-vectors require \(n \cdot(2 n)=2 n^{2}\) FLOPs. However, if the matrix \(A\) is sparse, then each row of \(A\) contains few nonzero entries; thus, we may skip a large number of trivial multiplications in our inner products. In particular, the operation count for the inner product of a sparse \(n\)-vector with a dense \(n\)-vector is equal to twice the number of nonzero entries in the sparse vector. Thus, the operation count for the entire matrix-vector multiplication is equal to twice the number of nonzero entries in the matrix, i.e. \(2 \cdot \operatorname{nnz}(A)\), where \(\operatorname{nnz}(A)\) is the number of nonzero entries in \(A\). This agrees with our intuition because the matrix-vector product requires simply visiting each nonzero entry of \(A\), identifying the appropriate multiplier in \(v\) based on the column index, and adding the product to the appropriate entry of \(w\) based on the row index.
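To make the counting argument concrete, the following is a minimal sketch in Python of the row-wise product, assuming \(A\) is stored in compressed sparse row (CSR) form; the names `csr_matvec`, `values`, `col_idx`, and `row_ptr` are illustrative choices, not a particular library's API. The inner loop visits each nonzero entry of \(A\) exactly once, performing one multiplication and one addition, for \(2 \cdot \operatorname{nnz}(A)\) FLOPs in total.

```python
def csr_matvec(values, col_idx, row_ptr, v):
    """Row-wise sparse matrix-vector product w = A v, with A in CSR form.

    values  : nonzero entries of A, stored row by row
    col_idx : column index of each nonzero entry
    row_ptr : row_ptr[i]:row_ptr[i+1] delimits the nonzeros of row i
    """
    n = len(row_ptr) - 1
    w = [0.0] * n
    for i in range(n):
        # inner product of the (sparse) i-th row with the dense vector v:
        # one multiply and one add per nonzero entry
        for k in range(row_ptr[i], row_ptr[i + 1]):
            w[i] += values[k] * v[col_idx[k]]
    return w

# Example: the 3x3 matrix [[2, 0, 0], [0, 3, 1], [0, 0, 4]] applied to v = [1, 1, 1]
values  = [2.0, 3.0, 1.0, 4.0]
col_idx = [0, 1, 2, 2]
row_ptr = [0, 1, 3, 4]
print(csr_matvec(values, col_idx, row_ptr, [1.0, 1.0, 1.0]))  # [2.0, 4.0, 4.0]
```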

Now let us consider the column-wise interpretation of matrix-vector multiplication. In this case, we interpret \(w=A v\) as \[\left(\begin{array}{c} w_{1} \\ w_{2} \\ \vdots \\ w_{n} \end{array}\right)=v_{1}\left(\begin{array}{c} A_{11} \\ A_{21} \\ \vdots \\ A_{n 1} \end{array}\right)+v_{2}\left(\begin{array}{c} A_{12} \\ A_{22} \\ \vdots \\ A_{n 2} \end{array}\right)+\cdots+v_{n}\left(\begin{array}{c} A_{1 n} \\ A_{2 n} \\ \vdots \\ A_{n n} \end{array}\right) .\] If \(A\) is sparse, then each column of \(A\) contains few nonzero entries. Thus, for each column we simply need to scale these few nonzero entries by the appropriate entry of \(v\) and add the results to the corresponding entries of \(w\); the operation count is twice the number of nonzero entries in the column. Repeating the operation for all columns of \(A\), the operation count for the entire matrix-vector multiplication is again \(2 \cdot \operatorname{nnz}(A)\).
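Analogously, here is a minimal sketch of the column-wise product, assuming \(A\) is stored in compressed sparse column (CSC) form; again the names `csc_matvec`, `values`, `row_idx`, and `col_ptr` are illustrative. Each nonzero entry of column \(j\) is scaled by \(v_{j}\) and accumulated into \(w\), so the cost is again \(2 \cdot \operatorname{nnz}(A)\) FLOPs.

```python
def csc_matvec(values, row_idx, col_ptr, v):
    """Column-wise sparse matrix-vector product w = A v, with A in CSC form.

    values  : nonzero entries of A, stored column by column
    row_idx : row index of each nonzero entry
    col_ptr : col_ptr[j]:col_ptr[j+1] delimits the nonzeros of column j
    """
    n = len(col_ptr) - 1
    w = [0.0] * n
    for j in range(n):
        # scale the few nonzero entries of column j by v[j] and accumulate into w:
        # one multiply and one add per nonzero entry
        for k in range(col_ptr[j], col_ptr[j + 1]):
            w[row_idx[k]] += values[k] * v[j]
    return w

# Same 3x3 matrix [[2, 0, 0], [0, 3, 1], [0, 0, 4]], now stored column by column
values  = [2.0, 3.0, 1.0, 4.0]
row_idx = [0, 1, 1, 2]
col_ptr = [0, 1, 2, 4]
print(csc_matvec(values, row_idx, col_ptr, [1.0, 1.0, 1.0]))  # [2.0, 4.0, 4.0]
```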

Because the number of nonzero entries in a sparse matrix is (by definition) \(\mathcal{O}(n)\), the operation count for a sparse matrix-vector product is \(2 \cdot \operatorname{nnz}(A) = \mathcal{O}(n)\). For example, for a banded matrix with bandwidth \(m_{\mathrm{b}}\), the operation count is at most \(2 n\left(2 m_{\mathrm{b}}+1\right)\). Thus, we achieve a significant reduction in the operation count compared to dense matrix-vector multiplication, which requires \(2 n^{2}\) operations.
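As an illustrative calculation with assumed values, consider a tridiagonal matrix, for which \(m_{\mathrm{b}}=1\), of size \(n=10^{6}\): the sparse product costs at most \[2 n\left(2 m_{\mathrm{b}}+1\right)=6 \times 10^{6} \text { FLOPs, }\] whereas the dense product would require \(2 n^{2}=2 \times 10^{12}\) FLOPs, a savings of roughly five orders of magnitude.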

