Ch 2.2 - The Inverse of a Matrix

Understanding Matrix Operations and Efficiency

  • Equality of Transposed Products:

    • The quantities (Ax)^T and x^T A^T are equal, as indicated by Theorem 3(d).

  • Scalar Product (Dot Product) vs. Outer Product:

    • For a column vector x = [[x_1], [x_2]]:

      • The scalar product x^T x = [x_1 x_2][[x_1], [x_2]] = [x_1^2 + x_2^2]. This results in a 1×1 matrix, usually written without brackets (a scalar).

      • Example from transcript (assuming x = [5; 3] for the calculation of 34): If x = [[5], [3]], then x^T x = [5 3][[5], [3]] = [25 + 9] = 34.

      • The outer product x x^T = [[x_1], [x_2]][x_1 x_2] = [[x_1^2, x_1 x_2], [x_2 x_1, x_2^2]].

      • Example from transcript (assuming x = [3; 5] for the calculation of the matrix): If x = [[3], [5]], then x x^T = [[3], [5]][3 5] = [[9, 15], [15, 25]].
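
These two products can be checked in a few lines of plain Python (a sketch with no libraries; `inner` and `outer` are names chosen here, not from the text):

```python
# Inner (scalar) and outer products of a column vector with itself,
# represented as a plain Python list; x = [3, 5] as in the outer-product example.
x = [3, 5]

# x^T x: a 1x1 matrix, i.e. a scalar.
inner = sum(xi * xi for xi in x)

# x x^T: an n x n matrix whose (i, j) entry is x_i * x_j.
outer = [[xi * xj for xj in x] for xi in x]

print(inner)   # 34
print(outer)   # [[9, 15], [15, 25]]
```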

  • Matrix Product Definition and Undefined Operations:

    • The product A^T x^T is not defined when the number of columns in A^T does not match the number of rows in x^T. For instance, if A^T has 2 columns and x is a column vector, then x^T is a row vector with only one row, so A^T x^T is undefined. For the product to be defined, x^T would need 2 rows, which would make x a 1×2 row vector rather than a column vector, contrary to the usual convention for x^T x. The exact dimensions of A^T and x were not stated beyond the fact that x^T does not have two rows to match the two columns of A^T.

  • Computational Efficiency of Matrix Multiplication:

    • When computing A^2 x, it is generally more efficient to compute it as A(Ax).

    • Comparison for a 4×4 matrix A and a 4×1 vector x:

      • Method 1: A(Ax).

        • Computing Ax requires 4 × 4 = 16 multiplications (4 for each of the 4 entries in the resulting vector).

        • Computing A(Ax) then requires another 16 multiplications.

        • Total: 16 + 16 = 32 multiplications.

      • Method 2: A^2 x.

        • Computing A^2 = A·A requires 4^3 = 64 multiplications (4 for each of the 16 entries in the 4×4 matrix A^2).

        • Computing A^2 x then requires another 16 multiplications.

        • Total: 64 + 16 = 80 multiplications.

    • Conclusion: Computing A(Ax) is significantly faster than computing A^2 x.
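
The counts above can be reproduced with simple arithmetic (a sketch; the cost model counts only scalar multiplications, as in the text):

```python
# Count scalar multiplications for A(Ax) versus (A*A)x, with n = 4.
n = 4

matvec_cost = n * n    # n multiplications per entry, n entries in the result
matmat_cost = n ** 3   # n multiplications per entry, n*n entries in A^2

method1 = matvec_cost + matvec_cost   # A(Ax): two matrix-vector products
method2 = matmat_cost + matvec_cost   # A^2 x: one matrix product, then a matvec

print(method1, method2)   # 32 80
```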

  • Properties of Matrix Products (AB) with Identical Rows/Columns:

    • If all columns of matrix B are identical, then all columns of the product AB are also identical (col_j(AB) = A·col_j(B), and every col_j(B) is the same).

    • If all rows of matrix A are identical (i.e., row_i(A) = row_j(A) for all i, j), then all rows of the product AB are also identical (row_i(AB) = row_i(A)·B).

    • If all rows of A are identical and all columns of B are identical, then all entries in AB will be the same.
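
The identical-rows property can be verified on a small example (a sketch; `matmul` is a helper written for this check, not from the text):

```python
# If all rows of A are identical, every row of AB equals row_i(A) * B.
def matmul(A, B):
    # Plain triple-loop matrix product for small examples.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [1, 2]]   # both rows identical
B = [[3, 4], [5, 6]]
AB = matmul(A, B)
print(AB)              # [[13, 16], [13, 16]] -- both rows identical
```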

The Inverse of a Matrix

  • Matrix Analogue of a Reciprocal:

    • The concept of a matrix inverse (A^{-1}) is analogous to the multiplicative inverse (reciprocal) of a nonzero real number (e.g., 5^{-1} = 1/5).

    • For real numbers, the inverse satisfies 5^{-1}(5) = 1 and 5(5^{-1}) = 1.

  • Key Distinctions for Matrices:

    • Two-Sided Definition: Because matrix multiplication is generally not commutative (AB ≠ BA), the matrix generalization requires both equations to be satisfied: CA = I and AC = I.

    • No Division Notation: Slanted-line notation (e.g., A/B) is avoided for matrices.

    • Square Matrices Only: A full generalization of the inverse is possible only for square matrices (n×n).

  • Definition of an Invertible Matrix:

    • An n×n matrix A is invertible (or nonsingular) if there exists an n×n matrix C such that CA = I and AC = I, where I = I_n is the n×n identity matrix.

    • C is called an inverse of A.

    • Uniqueness of the Inverse: The inverse C is uniquely determined by A.

      • Proof: If B were another inverse of A, then B = BI = B(AC) = (BA)C = IC = C. Thus, B = C.

    • The unique inverse is denoted A^{-1}, so the defining equations are A^{-1}A = I and AA^{-1} = I.

    • A matrix that is not invertible is called a singular matrix.

  • Example 1: Verifying an Inverse for a 2×2 Matrix:

    • Given A = [[3, 2], [7, 5]] and C = [[5, -2], [-7, 3]].

    • CA = [[5, -2], [-7, 3]][[3, 2], [7, 5]] = [[(5)(3) + (-2)(7), (5)(2) + (-2)(5)], [(-7)(3) + (3)(7), (-7)(2) + (3)(5)]] = [[15 - 14, 10 - 10], [-21 + 21, -14 + 15]] = [[1, 0], [0, 1]] = I.

    • AC = [[3, 2], [7, 5]][[5, -2], [-7, 3]] = [[(3)(5) + (2)(-7), (3)(-2) + (2)(3)], [(7)(5) + (5)(-7), (7)(-2) + (5)(3)]] = [[15 - 14, -6 + 6], [35 - 35, -14 + 15]] = [[1, 0], [0, 1]] = I.

    • Since both conditions are met, C = A^{-1}.
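
Example 1 can be double-checked mechanically (a sketch; `matmul` is a helper defined for this check, not from the text):

```python
# Verify Example 1: CA = I and AC = I for the given 2x2 matrices.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[3, 2], [7, 5]]
C = [[5, -2], [-7, 3]]

print(matmul(C, A))   # [[1, 0], [0, 1]]
print(matmul(A, C))   # [[1, 0], [0, 1]]
```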

Inverse of a 2×2 Matrix

  • Theorem 4: Formula for the Inverse of a 2×2 Matrix:

    • Let A = [[a, b], [c, d]].

    • If ad - bc ≠ 0, then A is invertible and its inverse is given by:
      A^{-1} = (1/(ad - bc)) [[d, -b], [-c, a]]

    • If ad - bc = 0, then A is not invertible.

  • Determinant of a 2×2 Matrix:

    • The quantity ad - bc is called the determinant of A, denoted det A = ad - bc.

    • Theorem 4 implies that a 2×2 matrix A is invertible if and only if det A ≠ 0.

  • Example 2: Finding the Inverse of a 2×2 Matrix:

    • Find the inverse of A = [[3, 4], [5, 6]].

    • First, calculate the determinant: det A = (3)(6) - (4)(5) = 18 - 20 = -2.

    • Since det A = -2 ≠ 0, A is invertible.

    • Using Theorem 4:
      A^{-1} = (1/(-2)) [[6, -4], [-5, 3]] = [[-3, 2], [5/2, -3/2]]
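
Theorem 4 translates directly into a small function (a sketch; the name `inv2x2` is invented here, and exact `Fraction` arithmetic avoids rounding):

```python
# Theorem 4 as code: invert a 2x2 matrix via the (ad - bc) formula.
from fractions import Fraction

def inv2x2(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        return None                     # det A = 0: A is not invertible
    f = Fraction(1, det)
    return [[f * d, -f * b], [-f * c, f * a]]

A = [[3, 4], [5, 6]]
print(inv2x2(A))   # entries -3, 2, 5/2, -3/2, matching Example 2
```

Returning `None` for a zero determinant mirrors the theorem's second clause.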

Solving Linear Systems with Invertible Matrices

  • Importance of Invertible Matrices:

    • They are essential for algebraic calculations and formula derivations in linear algebra.

    • They can provide insights into mathematical models of real-life situations.

  • Theorem 5: Unique Solution for Ax = b:

    • If A is an invertible n×n matrix, then for any vector b in R^n, the matrix equation Ax = b has a unique solution given by x = A^{-1}b.

    • Proof of Existence: Substituting A^{-1}b for x in the equation: A(A^{-1}b) = (AA^{-1})b = Ib = b. Thus, A^{-1}b is a solution.

    • Proof of Uniqueness: Assume u is any solution, so that Au = b. Multiplying both sides by A^{-1} on the left:
      A^{-1}(Au) = A^{-1}b
      (A^{-1}A)u = A^{-1}b
      Iu = A^{-1}b
      u = A^{-1}b
      Consequently, any solution must be equal to A^{-1}b, proving uniqueness.

Practical Application: Elastic Beam Deflection (Example 3)

  • Scenario: A horizontal elastic beam is supported at its ends and subjected to forces at three points (1, 2, 3), causing deflections.

  • Variables:

    • f ∈ R^3: A vector listing the forces at the three points.

    • y ∈ R^3: A vector listing the amounts of deflection at the three points.

  • Hooke's Law Relationship: y = Df

    • D: The flexibility matrix.

    • D^{-1}: The stiffness matrix.

  • Physical Significance of Columns of D (Flexibility Matrix):

    • Express D as D = D·I_3 = [De_1 De_2 De_3], where the e_j are the standard basis vectors (the columns of the identity matrix).

    • Interpreting e_1: The vector (1, 0, 0) represents a unit force applied downward at point 1, with zero force at the other two points.

    • First column of D (De_1): Contains the beam deflections that result from applying a unit force at point 1 (and zero force at points 2 and 3).

    • Similarly, the second and third columns of D list the deflections caused by a unit force at points 2 and 3, respectively.

  • Physical Significance of Columns of D^{-1} (Stiffness Matrix):

    • The inverse equation is f = D^{-1}y, which computes the force vector f required to produce a given deflection vector y (i.e., this matrix describes the beam's stiffness).

    • Express D^{-1} as D^{-1} = [D^{-1}e_1 D^{-1}e_2 D^{-1}e_3].

    • Interpreting e_1 as a deflection vector: The vector (1, 0, 0) now represents a unit deflection at point 1, with zero deflections at the other two points.

    • First column of D^{-1} (D^{-1}e_1): Lists the forces that must be applied at the three points to produce a unit deflection at point 1 and zero deflections at points 2 and 3.

    • Similarly, columns 2 and 3 of D^{-1} list the forces required to produce unit deflections at points 2 and 3, respectively.

    • Note on Forces: To achieve specific deflections, some forces in these columns might be negative (indicating an upward force).

    • Units: If flexibility is measured in inches of deflection per pound of load, then stiffness matrix entries are given in pounds of load per inch of deflection.

Practicalities of Using A^{-1} to Solve Ax = b

  • Computational Efficiency: The formula x = A^{-1}b is rarely used for numerical computation of Ax = b for large matrices.

    • Row reduction of the augmented matrix [A b] is almost always faster and generally more accurate (as it can minimize rounding errors).

  • Exception: The 2×2 case can be an exception, where mental calculation of A^{-1} might make using the formula quicker.

  • Example 4: Solving a 2×2 System Using A^{-1}

    • System:
      3x_1 + 4x_2 = 3
      5x_1 + 6x_2 = 7

    • This is equivalent to Ax = b where A = [[3, 4], [5, 6]] and b = [[3], [7]].

    • From Example 2, we found A^{-1} = [[-3, 2], [5/2, -3/2]].

    • Solution:
      x = A^{-1}b = [[-3, 2], [5/2, -3/2]][[3], [7]]
      = [[(-3)(3) + (2)(7)], [(5/2)(3) + (-3/2)(7)]]
      = [[-9 + 14], [15/2 - 21/2]]
      = [[5], [-6/2]] = [[5], [-3]]

    • So, x_1 = 5 and x_2 = -3.
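
The same arithmetic in code, reusing the inverse from Example 2 (a sketch with exact fractions):

```python
# Example 4 recomputed: x = A^{-1} b, with A^{-1} taken from Example 2.
from fractions import Fraction

A_inv = [[Fraction(-3), Fraction(2)],
         [Fraction(5, 2), Fraction(-3, 2)]]
b = [3, 7]

# Matrix-vector product, one entry per row of A^{-1}.
x = [sum(row[j] * b[j] for j in range(2)) for row in A_inv]
print(x)   # equals [5, -3]
```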

Properties of Invertible Matrices

  • Theorem 6: Important Facts about Invertible Matrices:

    • a. Inverse of an Inverse: If A is an invertible matrix, then its inverse, A^{-1}, is also invertible, and (A^{-1})^{-1} = A.

      • Proof: By definition, A^{-1}A = I and AA^{-1} = I. These equations satisfy the conditions for A to be the inverse of A^{-1}.

    • b. Inverse of a Product: If A and B are n×n invertible matrices, then their product AB is also invertible. The inverse of AB is the product of their inverses in reverse order: (AB)^{-1} = B^{-1}A^{-1}

      • Proof: We need to show that (AB)(B^{-1}A^{-1}) = I and (B^{-1}A^{-1})(AB) = I:
        (AB)(B^{-1}A^{-1}) = A(BB^{-1})A^{-1} = AIA^{-1} = AA^{-1} = I
        A similar calculation shows (B^{-1}A^{-1})(AB) = I.

    • c. Inverse of a Transpose: If A is an invertible matrix, then its transpose, A^T, is also invertible. The inverse of A^T is the transpose of A^{-1}: (A^T)^{-1} = (A^{-1})^T

      • Proof: Using Theorem 3(d) (which states (XY)^T = Y^T X^T):
        (A^{-1})^T A^T = (AA^{-1})^T = I^T = I
        And also:
        A^T (A^{-1})^T = (A^{-1}A)^T = I^T = I
        Thus, A^T is invertible, and its inverse is (A^{-1})^T.
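
Part (c) can be spot-checked on the matrices of Example 1, whose inverse has integer entries (a sketch; `matmul` and `transpose` are helpers written here):

```python
# Check Theorem 6(c) numerically: (A^{-1})^T A^T = I for Example 1's A.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

A = [[3, 2], [7, 5]]
A_inv = [[5, -2], [-7, 3]]   # the inverse verified in Example 1

lhs = matmul(transpose(A_inv), transpose(A))
print(lhs)   # [[1, 0], [0, 1]]
```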

  • Generalization of Theorem 6(b):

    • The product of any number of n×n invertible matrices is invertible, and its inverse is the product of their inverses in reverse order. For example, (ABC)^{-1} = C^{-1}B^{-1}A^{-1}.

  • Role of Definitions in Proofs: Proofs rigorously demonstrate that a proposed inverse (or other property) satisfies the formal definition. For example, showing B^{-1}A^{-1} is the inverse of AB means showing it satisfies the definition of an inverse with respect to AB.

Elementary Matrices and Computing A^{-1}

  • Connection to Row Operations:

    • A significant connection exists between invertible matrices and elementary row operations.

    • An invertible matrix A is row equivalent to the identity matrix I_n.

    • This relationship provides a systematic method for finding A^{-1}.

  • Definition of an Elementary Matrix:

    • An elementary matrix is a matrix obtained by performing a single elementary row operation on an identity matrix (I_m).

    • There are three types of elementary matrices, corresponding to the three elementary row operations:

      1. Row Replacement: Adding a multiple of one row to another.

      2. Row Interchange: Swapping two rows.

      3. Row Scaling: Multiplying a row by a nonzero scalar.

  • Example 5: Elementary Matrices and Row Operations:

    • Let A = [[a, b, c], [d, e, f], [g, h, i]].

    • E_1 = [[1, 0, 0], [0, 1, 0], [-4, 0, 1]] (obtained by R_3 ← R_3 - 4R_1 on I_3).

      • E_1 A performs the operation R_3 ← R_3 - 4R_1 on A.

    • E_2 = [[0, 1, 0], [1, 0, 0], [0, 0, 1]] (obtained by R_1 ↔ R_2 on I_3).

      • E_2 A performs the operation R_1 ↔ R_2 on A.

    • E_3 = [[1, 0, 0], [0, 1, 0], [0, 0, 5]] (obtained by R_3 ← 5R_3 on I_3).

      • E_3 A performs the operation R_3 ← 5R_3 on A.

  • General Fact: If an elementary row operation is performed on an m×n matrix A, the resulting matrix can be written as EA, where E is the m×m elementary matrix created by performing the same row operation on I_m.

  • Invertibility of Elementary Matrices:

    • Every elementary matrix E is invertible.

    • This is because elementary row operations are reversible (Section 1.1).

    • The inverse of an elementary matrix E is simply the elementary matrix of the same type that performs the reverse operation, transforming E back into the identity matrix I.

  • Example 6: Finding the Inverse of an Elementary Matrix:

    • Given E_1 = [[1, 0, 0], [0, 1, 0], [-4, 0, 1]] (which adds -4 times row 1 to row 3 of I_3).

    • To reverse this operation and transform E_1 back into I_3, one must add +4 times row 1 to row 3.

    • Therefore, the inverse is E_1^{-1} = [[1, 0, 0], [0, 1, 0], [4, 0, 1]].
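
Examples 5 and 6 can be checked directly (a sketch; `matmul` is a helper defined here, and the 3×3 matrix A is an arbitrary choice):

```python
# E1 performs R3 <- R3 - 4*R1 when multiplied on the left; its inverse
# performs the reverse operation R3 <- R3 + 4*R1, so E1_inv * E1 = I.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

I3     = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
E1     = [[1, 0, 0], [0, 1, 0], [-4, 0, 1]]
E1_inv = [[1, 0, 0], [0, 1, 0], [ 4, 0, 1]]

A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]   # any 3x3 matrix works here
print(matmul(E1, A))       # row 3 becomes [7-4, 8-8, 9-12] = [3, 0, -3]
print(matmul(E1_inv, E1))  # the identity matrix
```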

The Algorithm for Finding A^{-1}

  • Theorem 7: Invertibility and Row Equivalence to Identity:

    • An n×n matrix A is invertible if and only if A is row equivalent to the n×n identity matrix I_n.

    • Furthermore, if A is invertible, any sequence of elementary row operations that reduces A to I_n will also transform I_n into A^{-1}.

  • Proof of Theorem 7:

    • Part 1: If A is invertible, then A ~ I_n (A is row equivalent to I_n).

      • By Theorem 5, if A is invertible, the equation Ax = b has a solution for every b.

      • This implies that A has a pivot position in every row (Theorem 4 in Section 1.4).

      • Since A is a square n×n matrix, having n pivot positions in n rows means all n pivot positions must lie on the main diagonal.

      • Therefore, the reduced echelon form of A must be I_n, meaning A ~ I_n.

    • Part 2: If A ~ I_n, then A is invertible.

      • If A ~ I_n, then A can be transformed into I_n by a sequence of elementary row operations.

      • Each elementary row operation corresponds to left-multiplication by an elementary matrix. So there exist elementary matrices E_1, E_2, …, E_p such that: E_p ⋯ E_2 E_1 A = I_n  (1)

      • Since each elementary matrix is invertible, their product E_p ⋯ E_1 is also invertible (by the generalization of Theorem 6b).

      • Let K = E_p ⋯ E_1. Then KA = I_n. Because K is invertible, this implies that A is invertible.

      • More specifically, from KA = I_n, multiply both sides on the left by K^{-1} to get A = K^{-1}. Then (A^{-1})^{-1} = A implies A^{-1} = (K^{-1})^{-1} = K = E_p ⋯ E_1.

      • This means that A^{-1} is precisely the matrix obtained by applying the same sequence of elementary row operations (E_1, E_2, …, E_p) to I_n (because E_p ⋯ E_1 I_n = E_p ⋯ E_1 = K = A^{-1}).

  • Algorithm for Finding A1A^{-1}:

    • To find the inverse of an n×n matrix A, form the augmented matrix [A I] by placing A and the identity matrix I_n side by side.

    • Perform row operations to reduce this augmented matrix.

    • If A is row equivalent to I_n: The augmented matrix will transform from [A I] to [I A^{-1}]. The matrix on the right side will be A^{-1}.

    • If A is not row equivalent to I_n: If, during row reduction, a row of zeros appears on the left side (where A initially was), then A is not invertible, and no inverse exists.

  • Example 7: Finding the Inverse of a 3×3 Matrix:

    • Find the inverse of A = [[1, 2, 1], [0, 1, 0], [3, 0, 1]], if it exists.

    • Form the augmented matrix [A I]:
      [[1, 2, 1 | 1, 0, 0], [0, 1, 0 | 0, 1, 0], [3, 0, 1 | 0, 0, 1]]

    • Perform row operations:

      1. R_3 ← R_3 - 3R_1:
        [[1, 2, 1 | 1, 0, 0], [0, 1, 0 | 0, 1, 0], [0, -6, -2 | -3, 0, 1]]

      2. R_3 ← R_3 + 6R_2:
        [[1, 2, 1 | 1, 0, 0], [0, 1, 0 | 0, 1, 0], [0, 0, -2 | -3, 6, 1]]

      3. R_3 ← (-1/2)R_3:
        [[1, 2, 1 | 1, 0, 0], [0, 1, 0 | 0, 1, 0], [0, 0, 1 | 3/2, -3, -1/2]]

      4. R_1 ← R_1 - R_3:
        [[1, 2, 0 | -1/2, 3, 1/2], [0, 1, 0 | 0, 1, 0], [0, 0, 1 | 3/2, -3, -1/2]]

      5. R_1 ← R_1 - 2R_2:
        [[1, 0, 0 | -1/2, 1, 1/2], [0, 1, 0 | 0, 1, 0], [0, 0, 1 | 3/2, -3, -1/2]]

    • Since A is row equivalent to I, A is invertible. The inverse is:
      A^{-1} = [[-1/2, 1, 1/2], [0, 1, 0], [3/2, -3, -1/2]]
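
The whole algorithm fits in a short function (a sketch, not a production routine; the name `invert` is invented here, and `Fraction` arithmetic keeps the row reduction exact):

```python
# Gauss-Jordan inversion: row reduce [A | I] to [I | A^{-1}].
from fractions import Fraction

def invert(A):
    n = len(A)
    # Build the augmented matrix [A | I].
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Find a nonzero pivot at or below the diagonal; none => A is singular.
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            return None
        M[col], M[pivot] = M[pivot], M[col]
        # Scale the pivot row to make the pivot 1, then clear the column.
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    # The right half of [I | A^{-1}] is the inverse.
    return [row[n:] for row in M]

A = [[1, 2, 1], [0, 1, 0], [3, 0, 1]]
print(invert(A))   # entries -1/2, 1, 1/2 / 0, 1, 0 / 3/2, -3, -1/2, as in Example 7
```

A singular input falls out naturally as `None` when no pivot can be found, matching the "row of zeros on the left" criterion.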

  • Checking the Answer:

    • It's good practice to verify the calculated inverse by checking that AA^{-1} = I (or A^{-1}A = I).

    • Note: If A is known to be invertible and you derive a matrix C such that AC = I, then C must be A^{-1}. It is not strictly necessary to also check CA = I in this computational context, as the algorithm guarantees that if a matrix emerges on the right, it is the unique inverse.

  • Another View of Matrix Inversion (Solving Multiple Systems Simultaneously):

    • Finding A^{-1} by row reducing [A I] can be viewed as simultaneously solving n separate matrix equations:
      Ax_1 = e_1, Ax_2 = e_2, …, Ax_n = e_n
      where the e_j are the columns of the identity matrix I_n. The augmented columns for these systems are simply the columns of I_n, forming [A e_1 e_2 ⋯ e_n] = [A I].

    • The property AA^{-1} = I and the definition of matrix multiplication demonstrate that the columns of A^{-1} are precisely the solutions x_1, x_2, …, x_n to these systems.

    • Practical Use: This perspective is valuable if an applied problem only requires finding one or two specific columns of A^{-1}. In such cases, only the corresponding systems Ax_j = e_j need to be solved, rather than computing the full inverse.
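
Solving for a single column of A^{-1} looks like this (a sketch; `solve` is a helper invented here that row reduces just [A e_j] rather than the full [A I]):

```python
# Solve A x_1 = e_1 to recover only the first column of A^{-1}.
from fractions import Fraction

def solve(A, b):
    n = len(A)
    # Augmented matrix [A | b] with exact fractions.
    M = [[Fraction(x) for x in row] + [Fraction(bi)]
         for row, bi in zip(A, b)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [row[n] for row in M]

A = [[1, 2, 1], [0, 1, 0], [3, 0, 1]]
e1 = [1, 0, 0]
print(solve(A, e1))   # -1/2, 0, 3/2: the first column of A^{-1} from Example 7
```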