4.23: Untitled Page 78

$$\mathbf{C} = \mathbf{A}\,\mathbf{B} \tag{4-117}$$

while the general element $c_{ij}$ is given by

$$c_{ij} = \sum_{k=1}^{n} a_{ik}\, b_{kj}\,, \qquad i = 1, 2, \ldots, m\,, \quad j = 1, 2, \ldots, p \tag{4-118}$$

In Eq. 4-116 we see that an $m \times n$ matrix can multiply an $n \times p$ matrix to produce an $m \times p$ matrix, and that the matrix multiplication represented by $\mathbf{A}\mathbf{B}$ is defined only when the number of columns in $\mathbf{A}$ is equal to the number of rows in $\mathbf{B}$.

    The matrix multiplication illustrated in Eq. 4‐114 conforms to this rule since there are three columns in the matrix of mass fractions and three rows in the column matrix of mass flow rates. The configuration illustrated in Eq. 4‐114 is extremely common since it is associated with the solution of n equations for n unknowns.
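To make the shape rule and the element formula concrete, the following sketch uses NumPy (not part of this text; it is assumed here only for illustration) to multiply a hypothetical $2 \times 3$ matrix by a $3 \times 4$ matrix and to check one element of the product against the sum in Eq. 4-118.

```python
import numpy as np

# Hypothetical matrices chosen only to illustrate the shape rule of Eq. 4-116:
# an (m x n) matrix times an (n x p) matrix gives an (m x p) matrix.
m, n, p = 2, 3, 4
A = np.arange(m * n, dtype=float).reshape(m, n)   # shape (2, 3)
B = np.arange(n * p, dtype=float).reshape(n, p)   # shape (3, 4)

C = A @ B            # defined because A has n columns and B has n rows
print(C.shape)       # (2, 4), i.e., m x p

# The general element of Eq. 4-118: c_ij is the sum over k of a_ik * b_kj
i, j = 1, 2
c_ij = sum(A[i, k] * B[k, j] for k in range(n))
print(c_ij == C[i, j])   # True
```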

Our generic representation for this type of matrix equation is given by

$$\mathbf{A}\,\mathbf{u} = \mathbf{b} \tag{4-119}$$

in which $\mathbf{A}$ is a square matrix, $\mathbf{u}$ is a column matrix of unknowns, and $\mathbf{b}$ is a column matrix of knowns. These matrices are represented explicitly by

     11

    a

    12

    a

    ......

    1

    a n

     1

    u

     1

    b

     

     

    a

    a

    ...... a

    .

    .

    21

    22

    2 n

    A

     

    ,

    u

       ,

    b

      

    (4‐120)

    ....

    ....

    ......

    ....

    .

    .

     

     

    a 1

    n

    n

    a 2 ......

    n

    a n

    n

    u

    n

    b
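As a concrete illustration of the structure in Eq. 4-120, the short sketch below builds a hypothetical $3 \times 3$ coefficient matrix with NumPy; the numerical values are invented for illustration and do not come from the text.

```python
import numpy as np

# Hypothetical 3 x 3 system with the structure of Eq. 4-120: a square coefficient
# matrix A, a column matrix of unknowns u, and a column matrix of knowns b.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [4.0, 0.0, 1.0]])
u = np.array([1.0, 1.0, 2.0])   # pretend the unknowns are already determined
b = A @ u                        # each row of A u reproduces one of the n equations
print(b)                         # [3. 7. 6.]
```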

Sometimes the coefficients in $\mathbf{A}$ depend on the unknowns, $\mathbf{u}$, and the matrix equation may be bi-linear as indicated in Eqs. 4-91.

The transpose of a matrix is defined in the same manner as the transpose of an array that was discussed in Sec. 2.5; thus, the transpose of the matrix $\mathbf{A}$ is constructed by interchanging the rows and columns to obtain

     11

    a

    a 21 ... a 1

    m

     11

    a

    12

    a

    ......

    1

    a n

    12

    a

    a 22 ...

    m

    a 2

    a

    a

    ...... a

    21

    22

    2 n

    T

    A

     

    ,

    A

      .

    .

    ...

    .

    (4‐121)

    .

    .

    ......

    .

    .

    .

    ...

    .

    a

     1

    m

    m

    a 2 ......

    m

    a n

     1

    a n a 2 n ...

    m

    a n

Here it is important to note that $\mathbf{A}$ is an $m \times n$ matrix while $\mathbf{A}^{T}$ is an $n \times m$ matrix.
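A minimal NumPy sketch of Eq. 4-121, using an invented $2 \times 3$ matrix, shows the interchange of rows and columns and the corresponding change of shape from $m \times n$ to $n \times m$.

```python
import numpy as np

# Hypothetical 2 x 3 matrix: transposing interchanges rows and columns (Eq. 4-121).
A = np.array([[1, 2, 3],
              [4, 5, 6]])
At = A.T

print(A.shape, At.shape)      # (2, 3) (3, 2): A is m x n, A^T is n x m
print(At[2, 0] == A[0, 2])    # True: element (j, i) of A^T equals element (i, j) of A
```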


    4.9.1 Inverse of a square matrix

In order to solve Eq. 4-119, one cannot “divide” by $\mathbf{A}$ to determine the unknown, $\mathbf{u}$, since matrix division is not defined. There is, however, a related operation involving the inverse of a matrix. The inverse of a matrix $\mathbf{A}$ is another matrix, $\mathbf{A}^{-1}$, such that the product of $\mathbf{A}^{-1}$ and $\mathbf{A}$ is given by

$$\mathbf{A}^{-1}\,\mathbf{A} = \mathbf{I} \tag{4-122}$$

in which $\mathbf{I}$ is the identity matrix. Identity matrices have ones in the diagonal elements and zeros in the off-diagonal elements, as illustrated by the following $4 \times 4$ matrix:

$$\mathbf{I} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{4-123}$$

For the inverse of a matrix to exist, the matrix must be a square matrix, i.e., the number of rows must be equal to the number of columns. In addition, the determinant of the matrix must be different from zero. Thus, for Eq. 4-122 to be valid, we require that the determinant of $\mathbf{A}$ be different from zero, i.e.,

$$\det \mathbf{A} \neq 0 \tag{4-124}$$

    This type of requirement plays an important role in the derivation of the pivot theorem (see Sec. 6.4) that forms the basis for one of the key developments in Chapter 6.
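The following sketch, again with NumPy and an invented $2 \times 2$ matrix, checks the nonzero-determinant requirement of Eq. 4-124 and then verifies the defining property of the inverse in Eq. 4-122.

```python
import numpy as np

# Hypothetical square matrix used only to illustrate Eqs. 4-122 and 4-124.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Eq. 4-124: the inverse exists only when the determinant is different from zero.
print(np.linalg.det(A))                    # approximately 10, nonzero, so A is invertible

A_inv = np.linalg.inv(A)

# Eq. 4-122: the product of the inverse and the matrix is the identity matrix.
print(np.allclose(A_inv @ A, np.eye(2)))   # True
```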

    As an example of the use of the inverse of a matrix, we consider the following set of four equations containing four unknowns:

    (4‐125)

In compact notation, we would represent these equations according to

$$\mathbf{A}\,\mathbf{u} = \mathbf{b} \tag{4-126}$$
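Because the four specific equations of Eq. 4-125 are not reproduced above, the sketch below uses an invented $4 \times 4$ system purely to illustrate solving Eq. 4-126. Conceptually the solution is $\mathbf{u} = \mathbf{A}^{-1}\mathbf{b}$, although NumPy's linalg.solve obtains it without forming the inverse explicitly.

```python
import numpy as np

# Invented 4 x 4 system standing in for Eq. 4-125; the values are illustrative only.
A = np.array([[2.0, 1.0, 0.0, 0.0],
              [1.0, 3.0, 1.0, 0.0],
              [0.0, 1.0, 4.0, 1.0],
              [0.0, 0.0, 1.0, 5.0]])
b = np.array([1.0, 2.0, 3.0, 4.0])

# Conceptually u = A^{-1} b; numerically it is preferable to solve Au = b directly.
u = np.linalg.solve(A, b)
print(np.allclose(A @ u, b))   # True: the computed u satisfies Eq. 4-126
```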
