Ch 2.1 - Matrix Operations

2.1 Matrix Operations: Definitions

  • Matrix Structure: An m \times n matrix possesses m rows and n columns.

  • Entry Notation: The scalar entry situated in the i-th row and j-th column of a matrix A is denoted by a_{ij}, referred to as the (i, j)-entry of A.

    • Example: The (3, 2)-entry (a_{32}) is the number found in the third row and second column.

  • Columns as Vectors: Each column of an m \times n matrix A is an ordered list of m real numbers, which identifies a vector in \mathbb{R}^m .

    • Matrix Representation by Columns: A = [a_1 \ a_2 \ \dots \ a_n], where a_j denotes the j-th column vector.

    • Entry-Column Relationship: The entry a_{ij} is the i-th entry (from the top) of the j-th column vector a_j.

  • Diagonal Entries: For an m \times n matrix A = [a_{ij}], the diagonal entries are a_{11}, a_{22}, a_{33}, \dots, and they form the main diagonal of A.

  • Diagonal Matrix: A square n \times n matrix where all non-diagonal entries are zero.

    • Identity Matrix: The n \times n diagonal matrix whose diagonal entries are all 1, denoted by I_n (or simply I); it is the prime example of a diagonal matrix.

  • Zero Matrix: An m \times n matrix where all entries are zero, represented by 0. Its size is typically inferred from context.

  • Figure 1: Illustrates matrix notation, showing rows, columns, and the (i, j)-entry.
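  • NumPy sketch (illustrative): the definitions above translate directly into code. Note that NumPy indexes from 0, so the (3, 2)-entry a_{32} lives at index [2, 1]; the matrix values here are made up for illustration.

```python
import numpy as np

# A 3 x 2 matrix: 3 rows, 2 columns (values chosen arbitrarily)
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

print(A.shape)   # (3, 2): m rows, n columns
print(A[2, 1])   # the (3, 2)-entry a_32 (0-based indexing): 6.0
print(A[:, 1])   # the 2nd column of A, a vector in R^3

I3 = np.eye(3)                  # 3 x 3 identity matrix (a diagonal matrix)
Z = np.zeros((3, 2))            # 3 x 2 zero matrix
D = np.diag([7.0, 8.0, 9.0])    # diagonal matrix with the given diagonal entries
```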

Sums and Scalar Multiples of Matrices

  • Extension of Vector Arithmetic: The arithmetic operations defined for vectors naturally extend to matrices.

  • Matrix Equality: Two matrices are considered equal if:

    1. They have the same size (i.e., identical numbers of rows and columns).

    2. Their corresponding columns are equal, which implies their corresponding entries are equal.

  • Matrix Sum (A+B):

    • Definition: If A and B are m \times n matrices, their sum A+B is an m \times n matrix.

    • Column-wise Summation: The columns of A+B are the sums of the corresponding columns in A and B.

    • Entry-wise Summation: Each entry in A+B is the sum of the corresponding entries in A and B.

    • Condition for Definition: A+B is defined exclusively when A and B have identical sizes.

    • Example 1: A = \begin{bmatrix} 4 & 0 \\ 1 & 5 \end{bmatrix}, \quad B = \begin{bmatrix} 5 & 1 \\ 3 & 2 \end{bmatrix}, \quad C = \begin{bmatrix} 1 \\ 2 \end{bmatrix}
      A+B = \begin{bmatrix} 4+5 & 0+1 \\ 1+3 & 5+2 \end{bmatrix} = \begin{bmatrix} 9 & 1 \\ 4 & 7 \end{bmatrix}
      A+C is not defined because A is 2 \times 2 and C is 2 \times 1; they have different sizes (see the NumPy check below).
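    • NumPy check of Example 1 (a minimal sketch). One caveat: NumPy broadcasting would happily evaluate A + C even though the matrix sum is undefined, so the sketch compares shapes explicitly to mirror the math.

```python
import numpy as np

A = np.array([[4, 0],
              [1, 5]])
B = np.array([[5, 1],
              [3, 2]])
C = np.array([[1],
              [2]])

print(A + B)    # [[9 1]
                #  [4 7]]

# Matrix addition requires identical sizes.
if A.shape == C.shape:
    print(A + C)
else:
    print("A + C is not defined: shapes", A.shape, "and", C.shape, "differ")
```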

  • Scalar Multiple (rA):

    • Definition: If r is a scalar and A is a matrix, the scalar multiple rA is the matrix whose columns are r times the corresponding columns in A.

    • Notation: -A signifies (-1)A.

    • Subtraction: A-B is equivalent to A+(-1)B.

    • Example 2: For matrices A and B from Example 1:
      2B = 2 \begin{bmatrix} 5 & 1 \\ 3 & 2 \end{bmatrix} = \begin{bmatrix} 10 & 2 \\ 6 & 4 \end{bmatrix}
      A-2B = \begin{bmatrix} 4 & 0 \\ 1 & 5 \end{bmatrix} - \begin{bmatrix} 10 & 2 \\ 6 & 4 \end{bmatrix} = \begin{bmatrix} 4-10 & 0-2 \\ 1-6 & 5-4 \end{bmatrix} = \begin{bmatrix} -6 & -2 \\ -5 & 1 \end{bmatrix}
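    • NumPy check of Example 2 (illustrative): scalar multiplication and subtraction are entrywise in NumPy, just as in the definitions.

```python
import numpy as np

A = np.array([[4, 0],
              [1, 5]])
B = np.array([[5, 1],
              [3, 2]])

print(2 * B)         # [[10  2]
                     #  [ 6  4]]
print(A - 2 * B)     # [[-6 -2]
                     #  [-5  1]]
print(A + (-1) * B)  # same as A - B
```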

  • Theorem 1: Properties of Matrix Addition and Scalar Multiplication
    Let A, B, and C be matrices of the same size, and let r and s be scalars.

    • a. Commutative Law of Addition: A+B = B+A

    • b. Associative Law of Addition: (A+B)+C = A+(B+C)

    • c. Additive Identity: A+0=A

    • d. Distributive Law: r(A+B) = rA+rB

    • e. Distributive Law: (r+s)A = rA+sA

    • f. Associative Law of Scalar Multiplication: r(sA) = (rs)A

  • Verification of Theorem 1: Each property is verified by demonstrating that the matrices on both sides of the equality have the same size and that their corresponding columns (or entries) are equal. This relies on the analogous properties of vector addition and scalar multiplication.

    • Associative Property Implication: Due to associativity, sums of three or more matrices can be written without parentheses (e.g., A+B+C) as the order of grouping does not affect the result.
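  • Numerical spot-check of Theorem 1 (a sanity check on random integer matrices, not a proof; the real argument is the columnwise/entrywise one above):

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = (rng.integers(-5, 6, size=(3, 4)) for _ in range(3))
r, s = 2, -3

assert np.array_equal(A + B, B + A)                   # (a) commutativity
assert np.array_equal((A + B) + C, A + (B + C))       # (b) associativity
assert np.array_equal(A + np.zeros((3, 4), int), A)   # (c) additive identity
assert np.array_equal(r * (A + B), r * A + r * B)     # (d) distributivity
assert np.array_equal((r + s) * A, r * A + s * A)     # (e) distributivity
assert np.array_equal(r * (s * A), (r * s) * A)       # (f) scalar associativity
```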

Matrix Multiplication

  • Composition of Mappings: Matrix multiplication fundamentally represents a composition of linear transformations.

    • If a vector x is transformed by matrix B into Bx, and then Bx is transformed by matrix A into A(Bx), the goal is to represent this composite mapping as a single matrix multiplication, that is, A(Bx) = (AB)x.

    • Figures 2 & 3: Visually depict this transformation sequence.

  • Definition of Matrix Multiplication (Column-wise):

    • If A is an m \times n matrix and B is an n \times p matrix with columns b_1, \dots, b_p, then the product AB is the m \times p matrix whose columns are Ab_1, \dots, Ab_p.

    • Mathematically: AB = A[b_1 \ b_2 \ \dots \ b_p] = [Ab_1 \ Ab_2 \ \dots \ Ab_p].

    • This definition ensures that the equation A(Bx) = (AB)x holds true for all vectors x in \mathbb{R}^p .

    • Significance: This implies that the composite mapping is itself a linear transformation, and AB is its standard matrix.

  • Example 3: Computing AB
    Given A = \begin{bmatrix} 2 & 3 \\ 1 & -5 \end{bmatrix}, \quad B = \begin{bmatrix} 4 & 3 & 6 \\ 1 & -2 & 3 \end{bmatrix}
    Break B into columns: b_1 = \begin{bmatrix} 4 \\ 1 \end{bmatrix}, \quad b_2 = \begin{bmatrix} 3 \\ -2 \end{bmatrix}, \quad b_3 = \begin{bmatrix} 6 \\ 3 \end{bmatrix}
    Compute each column of AB:
    Ab_1 = \begin{bmatrix} 2 & 3 \\ 1 & -5 \end{bmatrix} \begin{bmatrix} 4 \\ 1 \end{bmatrix} = \begin{bmatrix} 2(4)+3(1) \\ 1(4)+(-5)(1) \end{bmatrix} = \begin{bmatrix} 11 \\ -1 \end{bmatrix}
    Ab_2 = \begin{bmatrix} 2 & 3 \\ 1 & -5 \end{bmatrix} \begin{bmatrix} 3 \\ -2 \end{bmatrix} = \begin{bmatrix} 2(3)+3(-2) \\ 1(3)+(-5)(-2) \end{bmatrix} = \begin{bmatrix} 0 \\ 13 \end{bmatrix}
    Ab_3 = \begin{bmatrix} 2 & 3 \\ 1 & -5 \end{bmatrix} \begin{bmatrix} 6 \\ 3 \end{bmatrix} = \begin{bmatrix} 2(6)+3(3) \\ 1(6)+(-5)(3) \end{bmatrix} = \begin{bmatrix} 21 \\ -9 \end{bmatrix}
    Thus, AB = [Ab_1 \ Ab_2 \ Ab_3] = \begin{bmatrix} 11 & 0 & 21 \\ -1 & 13 & -9 \end{bmatrix}. (A NumPy version of this column-by-column computation follows below.)
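    • NumPy sketch of the column definition (illustrative): the j-th column of AB is A times the j-th column of B. Here `@` is NumPy's matrix product, used only to confirm the column computation.

```python
import numpy as np

A = np.array([[2, 3],
              [1, -5]])
B = np.array([[4, 3, 6],
              [1, -2, 3]])

# Column definition: column j of AB equals A times column j of B.
cols = [A @ B[:, j] for j in range(B.shape[1])]
AB_by_columns = np.column_stack(cols)

print(AB_by_columns)                           # [[11  0 21]
                                               #  [-1 13 -9]]
print(np.array_equal(AB_by_columns, A @ B))    # True: matches NumPy's product
```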

  • Column Insight: Each column of AB is a linear combination of the columns of A, with the weights provided by the corresponding column of B.

  • Dimension Compatibility (Crucial Rule):

    • For Ab_j to be defined, the number of columns of A must equal the number of entries in each column b_j of B (i.e., the number of rows of B).

    • If A is m \times n and B is n \times p, then AB is an m \times p matrix. The inner dimensions (n and n) must match; the outer dimensions (m and p) determine the size of the product.

  • Example 4: Matrix Sizes

    • If A is 3 \times 5 and B is 5 \times 2:

      • Product AB is defined because the number of columns in A (5) matches the number of rows in B (5). The resulting matrix AB is 3 \times 2.

      • Product BA is not defined because the number of columns in B (2) does not match the number of rows in A (3).
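    • Size bookkeeping from Example 4, checked in NumPy (zero matrices of the stated shapes are enough to see which products are defined):

```python
import numpy as np

A = np.zeros((3, 5))   # 3 x 5
B = np.zeros((5, 2))   # 5 x 2

print((A @ B).shape)   # (3, 2): inner dimensions 5 and 5 match

try:
    B @ A              # inner dimensions 2 and 3 do not match
except ValueError as e:
    print("BA is not defined:", e)
```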

  • Row-Column Rule for Computing AB (Entry-wise Calculation):

    • This rule provides an efficient method for calculating individual entries of AB by hand, especially for smaller problems.

    • Rule: The entry in row i and column j of AB (denoted as (AB)_{ij}) is the sum of the products of corresponding entries from row i of A and column j of B.

    • Formula: If A is an m \times n matrix, then (AB)_{ij} = a_{i1}b_{1j} + a_{i2}b_{2j} + \dots + a_{in}b_{nj}.

    • Verification: This rule is derived directly from the row-vector rule for computing Ax, applied to the j-th column of AB (Ab_j).

  • Example 5: Using the Row-Column Rule (matrices from Example 3)
    Given A = \begin{bmatrix} 2 & 3 \\ 1 & -5 \end{bmatrix}, \quad B = \begin{bmatrix} 4 & 3 & 6 \\ 1 & -2 & 3 \end{bmatrix}

    • Entry in row 1, column 3 of AB: (Row 1 of A) \cdot (Column 3 of B)
      2(6) + 3(3) = 12 + 9 = 21

    • Entry in row 2, column 2 of AB: (Row 2 of A) \cdot (Column 2 of B)
      1(3) + (-5)(-2) = 3 + 10 = 13
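    • The two entries above as explicit row-by-column dot products in NumPy (illustrative; the 0-based indices in the code correspond to the 1-based row/column numbers in the text):

```python
import numpy as np

A = np.array([[2, 3],
              [1, -5]])
B = np.array([[4, 3, 6],
              [1, -2, 3]])

# (AB)_ij = (row i of A) . (column j of B); indices below are 0-based.
print(np.dot(A[0, :], B[:, 2]))   # entry in row 1, column 3: 21
print(np.dot(A[1, :], B[:, 1]))   # entry in row 2, column 2: 13
```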

  • Example 6: Finding a Specific Row of AB
    Given A = \begin{bmatrix} 2 & -5 & 0 \\ -1 & 3 & -4 \end{bmatrix}, \quad B = \begin{bmatrix} 4 & -6 & 1 \\ 7 & 1 & 3 \\ 2 & -3 & 4 \end{bmatrix}
    To find the second row of AB, multiply Row 2 of A by each column of B:

    • (AB)_{21} = (-1)(4) + 3(7) + (-4)(2) = -4 + 21 - 8 = 9

    • (AB)_{22} = (-1)(-6) + 3(1) + (-4)(-3) = 6 + 3 + 12 = 21

    • (AB)_{23} = (-1)(1) + 3(3) + (-4)(4) = -1 + 9 - 16 = -8
      The second row of AB is [9 \ 21 \ -8].

    • General Rule: row_i(AB) = row_i(A) \cdot B
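    • NumPy check of the row rule from Example 6 (illustrative; row 2 is index 1 because of 0-based indexing):

```python
import numpy as np

A = np.array([[ 2, -5,  0],
              [-1,  3, -4]])
B = np.array([[4, -6, 1],
              [7,  1, 3],
              [2, -3, 4]])

# row_i(AB) = row_i(A) . B
print(A[1, :] @ B)      # [ 9 21 -8]
print((A @ B)[1, :])    # same row, extracted from the full product
```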

Properties of Matrix Multiplication

  • Theorem 2: Properties of Matrix Multiplication
    Let A be an m \times n matrix, and let B and C have sizes for which the indicated sums and products are defined.

    • a. Associative Law of Multiplication: A(BC) = (AB)C

      • Proof: This property stems from the fact that matrix multiplication corresponds to the composition of linear transformations, and the composition of functions is associative. An alternative proof uses the column definition of matrix multiplication.

    • b. Left Distributive Law: A(B+C) = AB+AC

    • c. Right Distributive Law: (B+C)A = BA+CA

    • d. Scalar Multiplication Associativity: r(AB) = (rA)B = A(rB) for any scalar r

    • e. Identity for Matrix Multiplication: I_m A = A = A I_n

      • I_m represents the m \times m identity matrix.

      • Recall that I_m x = x for all x \in \mathbb{R}^m .
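  • Numerical spot-check of Theorem 2 (square matrices are used here so every indicated product is defined; again a sanity check on random integer matrices, not a proof):

```python
import numpy as np

rng = np.random.default_rng(1)
A, B, C = (rng.integers(-5, 6, size=(3, 3)) for _ in range(3))
r = 3
I = np.eye(3, dtype=int)

assert np.array_equal(A @ (B @ C), (A @ B) @ C)     # (a) associativity
assert np.array_equal(A @ (B + C), A @ B + A @ C)   # (b) left distributive law
assert np.array_equal((B + C) @ A, B @ A + C @ A)   # (c) right distributive law
assert np.array_equal(r * (A @ B), (r * A) @ B)     # (d) scalar associativity
assert np.array_equal(r * (A @ B), A @ (r * B))     # (d) other grouping
assert np.array_equal(I @ A, A) and np.array_equal(A @ I, A)  # (e) identity
```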

  • Parentheses in Matrix Expressions: The associative and distributive laws allow for the flexible placement and removal of parentheses in matrix expressions, similar to real number algebra.

    • Order Preservation: However, the left-to-right order of matrices in a product must always be preserved because matrix multiplication is generally not commutative.

  • Non-Commutativity (Warning 1):

    • In general, AB \ne BA.

    • Reason: Columns of AB are linear combinations of columns of A, while columns of BA are linear combinations of columns of B.

    • Terminology: When AB = BA, matrices A and B are said to commute with one another.

    • Order Emphasis: In the product AB, we say that A is right-multiplied by B, or that B is left-multiplied by A.
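    • Concrete illustration of the warning (the 2 \times 2 matrices here are chosen arbitrarily; most pairs of matrices do not commute):

```python
import numpy as np

A = np.array([[2, 3],
              [1, -5]])
B = np.array([[4, 3],
              [1, -2]])

print(A @ B)                          # [[11  0]
                                      #  [-1 13]]
print(B @ A)                          # [[11 -3]
                                      #  [ 0 13]]
print(np.array_equal(A @ B, B @ A))   # False: A and B do not commute
```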