Computational Linear Algebra Notes

Vector: An ordered tuple of real numbers, written $\mathbf{x} = [x_1, x_2, \dots, x_n]$, where each element $x_i$ belongs to the set of real numbers $\mathbb{R}$. Vectors can represent points in space or directions and can have various dimensions based on the context, ranging from 1D to n-dimensional spaces.

  • Code Example:

import numpy as np
vector = np.array([1, 2, 3])  # 3D vector

Vector Space: A vector space is a collection of vectors that can be scaled and added together; for example, $\mathbb{R}^n$ contains all vectors of length $n$ with real elements. For a set to be considered a vector space, it must satisfy certain axioms, including closure under addition and scalar multiplication, the existence of a zero vector, and the presence of additive inverses.

  • Code Example:

from sympy import Matrix
v1 = Matrix([1, 2])  # Vector v1
v2 = Matrix([3, 4])  # Vector v2
v_sum = v1 + v2  # Vector addition

Vector Operations

  • Scalar Multiplication: This operation involves multiplying a vector by a scalar, affecting its magnitude but not its direction (a negative scalar reverses the direction). The formula is $a\mathbf{x} = [ax_1, ax_2, \dots, ax_n]$, where $a$ is a scalar.

  • Code Example:

scalar = 2
scaled_vector = scalar * vector  # Scalar multiplication
  • Vector Addition: Combining two vectors produces a new vector. The addition is performed element-wise: $\mathbf{x} + \mathbf{y} = [x_1 + y_1, x_2 + y_2, \dots, x_d + y_d]$; both vectors must have the same dimension for the sum to be defined.

  • Code Example:

v_sum = v1 + v2  # Vector addition using sympy
  • Norm: The norm of a vector measures its length in the vector space. It is a function $\|\cdot\| : \mathbb{R}^n \rightarrow \mathbb{R}_{\geq 0}$, providing insight into the distance of the vector from the origin in Euclidean space.

  • Code Example:

norm_v = np.linalg.norm(vector)  # Euclidean norm
  • Inner Product: This operation quantifies the angle between two vectors, providing information about their orthogonality and correlation. The inner product of two vectors is typically denoted $\langle \mathbf{x}, \mathbf{y} \rangle = \mathbf{x} \cdot \mathbf{y} = \sum_i x_i y_i$.

  • Code Example:

a = np.array([1, 2])  # NumPy arrays; np.dot does not accept the sympy Matrices above
b = np.array([3, 4])
inner_product = np.dot(a, b)  # Inner product (dot product)

P-Norms (Minkowski Norms)

  • General form: The $p$-norm of a vector $\mathbf{x}$ is defined as $\|\mathbf{x}\|_p = \left(\sum_i |x_i|^p\right)^{1/p}$, where $p \geq 1$ (for $0 < p < 1$ the expression fails the triangle inequality and is not a true norm). This encompasses a variety of norms depending on the choice of $p$.

  • Euclidean Norm ($L_2$): This is the most commonly used norm in standard geometry, defined as the square root of the sum of the squares of the vector components.

  • Code Example:

euclidean_norm = np.linalg.norm(vector)  # L2 norm
  • Taxicab/Manhattan Norm ($L_1$): This norm sums the absolute values of the vector components.

  • Code Example:

manhattan_norm = np.sum(np.abs(vector))  # L1 norm
  • Infinity Norm ($L_\infty$): This norm captures the maximum absolute value among the vector's components.

  • Code Example:

infinity_norm = np.max(np.abs(vector))  # L∞ norm
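
The three norms above are all instances of the general $p$-norm, and the manual formula can be cross-checked against the `ord` argument of `np.linalg.norm` (a small sketch; the vector `[3, -4]` is an arbitrary example):

```python
import numpy as np

x = np.array([3.0, -4.0])  # example vector chosen for illustration

# Compare the general p-norm formula with NumPy's built-in `ord` argument
for p in (1, 2, 3):
    manual = np.sum(np.abs(x) ** p) ** (1.0 / p)
    builtin = np.linalg.norm(x, ord=p)
    print(p, manual, builtin)
```

For this vector the $L_2$ norm is exactly 5, the classic 3-4-5 triangle.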

Unit Vectors and Normalization

  • Unit Vector: A unit vector is defined as a vector of length one, indicated as $\mathbf{u}$.

  • Normalization: The process of converting a vector into a unit vector is known as normalization, achieved by dividing the vector by its norm: $\hat{\mathbf{x}} = \frac{\mathbf{x}}{\|\mathbf{x}\|}$.

  • Code Example:

unit_vector = vector / norm_v  # Normalizing the vector

Inner Products

  • Dot Product: For real-valued vectors in the n-dimensional space $\mathbb{R}^n$, the dot product is expressed as $\mathbf{x} \cdot \mathbf{y} = \sum_i x_i y_i$. The dot product yields a scalar value.

  • Code Example:

a = np.array([1, 2])  # NumPy arrays; np.dot does not accept the sympy Matrices above
b = np.array([3, 4])
dot_product = np.dot(a, b)  # Dot product calculation
  • Cosine Similarity: Defined as $\cos \theta = \frac{\mathbf{x} \cdot \mathbf{y}}{\|\mathbf{x}\| \, \|\mathbf{y}\|}$; the related cosine distance is $1 - \cos \theta$.

  • Code Example:

cosine_similarity = dot_product / (np.linalg.norm(a) * np.linalg.norm(b))  # Cosine similarity calculation

Vector Statistics

  • Mean Vector: The mean vector is the component-wise average of a set of vectors $\mathbf{x}_1, \dots, \mathbf{x}_N$, calculated as $\bar{\mathbf{x}} = \frac{1}{N}\sum_{i=1}^{N} \mathbf{x}_i$.

  • Code Example:

vectors = np.array([[1, 2], [3, 4]])  # two example vectors stacked as rows
mean_vector = np.mean(vectors, axis=0)  # Component-wise mean vector

High-Dimensional Vector Spaces

  • Volume and Sparsity: The volume of a high-dimensional space grows exponentially with the number of dimensions, so a fixed number of data points becomes increasingly sparse as dimensionality increases.
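
One way to see this sparsity numerically: the fraction of the unit hypercube covered by a sub-cube spanning 90% of each axis collapses toward zero as the dimension grows (a minimal sketch; the 0.9 side length is an arbitrary choice):

```python
import numpy as np

# Fraction of the unit hypercube [0, 1]^n covered by a sub-cube of side 0.9.
# The covered fraction 0.9**n shrinks toward zero, so fixed-size neighborhoods
# contain almost none of the space as the dimension n grows.
for n in (1, 10, 100):
    print(n, 0.9 ** n)
```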

Paradoxes

  • Concentration of Mass: In hypercubes of high dimensions, most of the volume is concentrated at the corners.

  • Surface Proximity: In high-dimensional hyperspheres, most points are found near the surface.

  • Distance Concentration: Distances between random vectors in high-dimensional spaces tend to concentrate around a common value, making near and far neighbors hard to distinguish.
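
The distance-concentration effect can be observed empirically by sampling random point pairs and watching the spread of their distances shrink relative to the mean (a small simulation sketch; the sample sizes, dimensions, and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Pairwise distances between random points concentrate as dimension grows:
# the ratio of the standard deviation to the mean distance shrinks.
for n in (2, 100, 10_000):
    points = rng.uniform(size=(200, n))
    d = np.linalg.norm(points[:100] - points[100:], axis=1)  # 100 random pairs
    print(n, d.std() / d.mean())
```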

Matrices and Linear Operators

  • Matrices: Defined as 2D arrays composed of rows and columns.

  • Code Example:

matrix = np.array([[1, 2], [3, 4]])  # Defining a matrix

Matrix Operations

  • Addition: Matrices can be added together element-wise when they share the same dimensions.

  • Code Example:

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
C = A + B  # Matrix addition
  • Scalar Multiplication: A matrix can be multiplied by a scalar, adjusting all elements uniformly.

  • Code Example:

s = 2
C = s * A  # Scalar multiplication of a matrix
  • Transposition: This operation involves rearranging a matrix by exchanging its rows and columns.

  • Code Example:

B = A.T  # Transposing a matrix
  • Matrix-Vector Multiplication: Represented as $\mathbf{y} = \mathbf{A}\mathbf{x}$.

  • Code Example:

x = np.array([5, 6])  # matrix is 2x2, so the vector must be 2D
y = np.dot(matrix, x)  # Matrix-vector multiplication
  • Multiplication: The multiplication of matrices is defined if the number of columns in the first matrix equals the number of rows in the second.

  • Code Example:

C = np.dot(A, B)  # Matrix multiplication

Special Matrices

  • Identity Matrix: Denoted as $I$, has 1s on the diagonal and 0s elsewhere.
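
A quick sketch of the identity's defining property, using `np.eye`:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
I = np.eye(2)  # 2x2 identity matrix

# Multiplying by the identity leaves a matrix unchanged
print(np.array_equal(I @ A, A))  # True
```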

Covariance Matrices

  • The covariance matrix quantifies how data points vary together across multiple dimensions.

  • Code Example:

data = np.random.randn(100, 3)  # example dataset: 100 samples, 3 features
covariance_matrix = np.cov(data, rowvar=False)  # Covariance matrix computation

Anatomy of a Matrix

  • Diagonal Entries: The diagonal entries of a matrix often serve as landmarks; in a covariance matrix, for example, they hold the variances of the individual dimensions.

  • Diagonal Matrices: Square matrices whose nonzero elements appear only on the main diagonal.
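
NumPy's `np.diag` covers both directions: extracting a matrix's diagonal and building a diagonal matrix from a vector (a brief sketch):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
d = np.diag(A)       # extracts the diagonal entries of a matrix
D = np.diag([5, 6])  # builds a diagonal matrix from a vector
print(d)  # [1 4]
print(D)
```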

Special Matrix Forms

  • Identity Matrix: A unique form of square matrix with 1s on the diagonal and 0s elsewhere.

  • Zero Matrix: A matrix in which all elements are zero.

  • Square Matrix: A matrix where the number of rows equals the number of columns.

  • Triangular Matrix: A matrix whose nonzero elements lie on and above the diagonal (upper triangular) or on and below it (lower triangular).
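
These special forms map directly onto NumPy constructors and helpers (a brief sketch):

```python
import numpy as np

I = np.eye(3)                   # identity matrix: 1s on the diagonal
Z = np.zeros((2, 3))            # zero matrix: all elements zero
S = np.array([[1, 2], [3, 4]])  # square matrix: rows == columns
U = np.triu(S)                  # upper triangular part (zeros below the diagonal)
L = np.tril(S)                  # lower triangular part (zeros above the diagonal)
```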