Unit 4 Vectors and Matrices: Directed Quantities, Parametric Motion, and Linear Maps

Vectors in Two Dimensions

A vector is a quantity that has both magnitude (size) and direction. This is different from a scalar (like temperature), which only has size. In precalculus, vectors are a powerful language for describing movement (displacement), forces, velocities, and any situation where “how much” and “which way” both matter.

Representing vectors: geometric vs. component form

Geometrically, you can draw a vector as an arrow. The arrow’s length represents magnitude, and the way the arrow points represents direction. A key idea is that a vector is defined by its length and direction—not by where you draw it. So you can “slide” a vector anywhere in the plane without changing it.

Algebraically, in two dimensions you usually represent a vector using components, which tell you how much the vector moves in the horizontal and vertical directions. If a vector moves right by a units and up by b units, you can write it as:

\langle a, b \rangle

You’ll also see vectors written as a column (especially when working with matrices):

[a]
[b]

Both notations represent the same vector; they just fit different contexts.

Notation reference (common equivalents)
Idea                    | Angle-bracket form   | Column form
Vector with components  | ⟨a, b⟩               | [a; b] (a 2x1 column)
Negative vector         | ⟨-a, -b⟩             | [-a; -b]
Scalar multiple         | k⟨a, b⟩ = ⟨ka, kb⟩   | k[a; b] = [ka; kb]

(For the column form, the semicolon is just a way to show “new row” in plain text.)

Why component form matters

Component form makes vector operations simple and systematic—especially when you later connect vectors to matrices and transformations. Instead of reasoning with pictures every time, you can compute with coordinates.

Vector addition and subtraction

Vector addition corresponds to combining movements: if you walk 3 units east then 2 units north, your overall displacement is the sum.

If:

\vec{u} = \langle u_x, u_y \rangle

and

\vec{v} = \langle v_x, v_y \rangle

then:

\vec{u} + \vec{v} = \langle u_x + v_x, u_y + v_y \rangle

Subtraction is “add the opposite”:

\vec{u} - \vec{v} = \langle u_x - v_x, u_y - v_y \rangle

What can go wrong: A common mistake is to add magnitudes instead of components. You only add magnitudes if the vectors point in exactly the same direction.

Worked example: displacement

You move 5 units right and 1 unit down, then 2 units left and 6 units up.

Let:

\vec{u} = \langle 5, -1 \rangle

\vec{v} = \langle -2, 6 \rangle

Then:

\vec{u} + \vec{v} = \langle 5 + (-2), -1 + 6 \rangle = \langle 3, 5 \rangle

So the net displacement is 3 right and 5 up.
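The component-wise rule can be sketched in a few lines of Python (the helper names add and sub are illustrative, not from any library):

```python
def add(u, v):
    """Add 2D vectors component by component."""
    return (u[0] + v[0], u[1] + v[1])

def sub(u, v):
    """Subtract: add the opposite of v, component by component."""
    return (u[0] - v[0], u[1] - v[1])

u = (5, -1)   # 5 right, 1 down
v = (-2, 6)   # 2 left, 6 up
print(add(u, v))  # (3, 5): net displacement 3 right, 5 up
```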

Scalar multiplication

Scalar multiplication stretches or shrinks a vector (and can reverse direction if the scalar is negative). If k is a scalar and \vec{v} = \langle v_x, v_y \rangle, then:

k\vec{v} = \langle kv_x, kv_y \rangle

This matters because many real situations scale vectors: doubling a velocity, halving a force, reversing a direction, and so on.

Worked example: scaling and reversing

If:

\vec{v} = \langle 4, -3 \rangle

then:

-2\vec{v} = \langle -8, 6 \rangle

The factor 2 doubles the length; the negative sign reverses direction.
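A minimal sketch of scalar multiplication (the helper name scale is illustrative):

```python
def scale(k, v):
    """Multiply each component of a 2D vector by the scalar k."""
    return (k * v[0], k * v[1])

v = (4, -3)
print(scale(-2, v))  # (-8, 6): length doubled, direction reversed
```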

Magnitude (length) and unit vectors

The magnitude of \langle a, b \rangle comes from the Pythagorean Theorem:

|\langle a, b \rangle| = \sqrt{a^2 + b^2}

A unit vector has magnitude 1 and represents pure direction. To turn a nonzero vector into a unit vector, divide by its magnitude:

\text{unit in direction of } \langle a, b \rangle = \frac{1}{\sqrt{a^2+b^2}}\langle a, b \rangle

What can go wrong: Forgetting to divide both components by the magnitude, or accidentally dividing by a^2+b^2 instead of \sqrt{a^2+b^2}.

Worked example: magnitude and direction

Find the magnitude of \langle 6, 8 \rangle and a unit vector in the same direction.

Magnitude:

|\langle 6, 8 \rangle| = \sqrt{6^2+8^2} = \sqrt{36+64} = \sqrt{100} = 10

Unit vector:

\frac{1}{10}\langle 6, 8 \rangle = \langle 0.6, 0.8 \rangle
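The magnitude and unit-vector formulas translate directly to code; note that both components are divided by the magnitude, and the zero vector is excluded (helper names are illustrative):

```python
import math

def magnitude(v):
    """Length of a 2D vector, via the Pythagorean Theorem."""
    return math.sqrt(v[0]**2 + v[1]**2)

def unit(v):
    """Unit vector in the direction of a nonzero v."""
    m = magnitude(v)
    if m == 0:
        raise ValueError("the zero vector has no direction")
    return (v[0] / m, v[1] / m)   # divide BOTH components by the magnitude

v = (6, 8)
print(magnitude(v))  # 10.0
print(unit(v))       # (0.6, 0.8)
```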

Direction angle (when needed)

Sometimes you describe direction using an angle \theta measured from the positive x-axis. For a vector \langle a, b \rangle with a \ne 0, the tangent relationship is:

\tan(\theta) = \frac{b}{a}

What can go wrong: Using arctangent without checking the quadrant. For example, \langle -1, -1 \rangle points into Quadrant III, not Quadrant I.
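The quadrant issue is exactly why two-argument arctangent exists: math.atan2(b, a) uses the signs of both components, while math.atan(b/a) cannot tell ⟨-1, -1⟩ from ⟨1, 1⟩.

```python
import math

# atan(b/a) loses the signs of a and b: for <-1, -1>, b/a = 1
theta_wrong = math.degrees(math.atan(-1 / -1))   # 45 degrees (Quadrant I -- wrong)

# atan2(b, a) keeps both signs and lands in the correct quadrant
theta_right = math.degrees(math.atan2(-1, -1))   # -135 degrees (Quadrant III)
```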

Exam Focus
  • Typical question patterns
    • Compute a resultant displacement from multiple moves (add/subtract vectors in component form).
    • Find magnitude or a unit vector given components.
    • Interpret a vector in a context (velocity, displacement) and describe what scaling or negating means.
  • Common mistakes
    • Adding/subtracting magnitudes instead of components.
    • Dropping the sign of a component (especially with “down” or “left”).
    • Forgetting that a negative scalar reverses direction.

Vector-Valued Functions

A vector-valued function outputs a vector instead of a single number. In 2D, it typically looks like:

\vec{r}(t) = \langle x(t), y(t) \rangle

You can think of it as describing a point moving in the plane over time: at each input value t, the function gives a position vector.

Why vector-valued functions matter

They connect algebra to motion and geometry. Instead of describing a path as a single equation y=f(x), you describe both coordinates together. This is especially useful when:

  • motion is easier to describe in time,
  • the curve fails the vertical line test (like a circle),
  • you want to model trajectories, animations, or repeated transformations.

Evaluating and interpreting

If:

\vec{r}(t) = \langle x(t), y(t) \rangle

then evaluating at t=a gives:

\vec{r}(a) = \langle x(a), y(a) \rangle

This is simply the position at that input.

Common interpretation:

  • x(t) is horizontal position.
  • y(t) is vertical position.
  • The parameter t is often time (but it could be any parameter).

Parametric curves: connecting to graphs

The graph of \vec{r}(t) is the set of points \big(x(t), y(t)\big) as t runs over a domain. You’re not plotting t on an axis; instead, t controls where you are on the curve.

What can go wrong: Mixing up the meaning of the parameter and trying to graph t directly on the coordinate plane.

Basic operations with vector-valued functions

Vector-valued functions add and scale component-by-component, just like vectors:

\vec{r}(t) + \vec{s}(t) = \langle x_r(t)+x_s(t), y_r(t)+y_s(t) \rangle

k\vec{r}(t) = \langle kx_r(t), ky_r(t) \rangle

This matters because many models combine motions: for example, “wind velocity + plane velocity.”

Motion viewpoint: position and displacement over time

If \vec{r}(t) represents position, then the displacement from time t=a to t=b is:

\vec{r}(b) - \vec{r}(a)

Average velocity over that interval is displacement divided by elapsed time:

\text{average velocity} = \frac{\vec{r}(b)-\vec{r}(a)}{b-a}

Even if you do not compute derivatives in this unit, this average-rate-of-change idea is a natural bridge from functions to motion.

Worked example: evaluating and displacement

Let:

\vec{r}(t) = \langle 2t-1, t^2 \rangle

1) Position at t=3:

\vec{r}(3) = \langle 2(3)-1, 3^2 \rangle = \langle 5, 9 \rangle

2) Displacement from t=1 to t=3:

\vec{r}(1) = \langle 2(1)-1, 1^2 \rangle = \langle 1, 1 \rangle

\vec{r}(3)-\vec{r}(1) = \langle 5-1, 9-1 \rangle = \langle 4, 8 \rangle

So the net change in position is 4 right and 8 up.
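The worked example above can be sketched as code, including the average-velocity idea (helper names r and displacement are illustrative):

```python
def r(t):
    """Position vector r(t) = <2t - 1, t^2>."""
    return (2*t - 1, t**2)

def displacement(a, b):
    """Displacement r(b) - r(a), component by component."""
    ra, rb = r(a), r(b)
    return (rb[0] - ra[0], rb[1] - ra[1])

print(r(3))                # (5, 9)
print(displacement(1, 3))  # (4, 8)

# average velocity = displacement / elapsed time
avg_velocity = tuple(d / (3 - 1) for d in displacement(1, 3))
print(avg_velocity)        # (2.0, 4.0)
```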

Eliminating the parameter (when possible)

Sometimes you can rewrite a parametric curve in a single equation by eliminating t. This can help you recognize the curve.

Worked example: recognize a parabola

Suppose:

x(t) = t

y(t) = t^2 - 4

Since x=t, substitute into y:

y = x^2 - 4

So the parametric curve traces the parabola y=x^2-4.
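A quick numerical spot-check of the elimination: sample several parameter values and confirm each point (x(t), y(t)) satisfies the single equation.

```python
def x(t):
    return t

def y(t):
    return t**2 - 4

# every sampled point should lie on the parabola y = x^2 - 4
for t in [-2, -1, 0, 0.5, 3]:
    assert y(t) == x(t)**2 - 4
print("all sampled points lie on y = x^2 - 4")
```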

What can go wrong: Assuming every parametric curve can be rewritten as a nice single equation. Sometimes elimination is messy or not possible in an elementary way.

Exam Focus
  • Typical question patterns
    • Evaluate a vector-valued function at given parameter values and interpret as position.
    • Compute displacement or average change over an interval using \vec{r}(b)-\vec{r}(a).
    • Identify or sketch the path traced as the parameter changes (often with a small table of values).
  • Common mistakes
    • Treating \vec{r}(t) like a scalar function and forgetting it outputs two coordinates.
    • Swapping components (plotting y(t) on the x-axis by accident).
    • Ignoring parameter direction (increasing t tells you how the curve is traced).

Matrices and Matrix Operations

A matrix is a rectangular array of numbers that organizes information. In this unit, matrices matter mainly because they provide a compact way to represent and compute with linear transformations and with functions that take vectors as inputs.

What a matrix represents (big picture)

In many AP Precalculus applications, a matrix acts like a machine:

  • Input: a vector (often a 2D point written as a column).
  • Output: a new vector.

That “machine” viewpoint becomes precise when you learn matrix multiplication.

Matrix basics: size and entries

A matrix has a dimension (also called size) of m \times n, meaning m rows and n columns.

Example (shown in plain text to avoid formatting issues):

A = [ 1  4  -2
      0  3   5 ]

This is a 2x3 matrix.

Matrix addition and scalar multiplication

You can only add matrices of the same dimension. Addition is entry-by-entry:

[ a  b ]   [ e  f ]   [ a+e  b+f ]
[ c  d ] + [ g  h ] = [ c+g  d+h ]

Scalar multiplication multiplies every entry by the scalar:

k[ a  b ] = [ ka  kb ]
 [ c  d ]   [ kc  kd ]

Why it matters: These operations let you build new transformations from old ones (for instance, blending two transformation rules or scaling their effect).

Matrix multiplication: the rule that makes matrices powerful

Matrix multiplication is not entry-by-entry. It is defined to capture function composition for linear transformations.

If A is m \times n and B is n \times p, then AB is m \times p.

To compute an entry in AB, you take a row from A and a column from B and compute a dot-like sum of products.

Worked example: multiplying two 2x2 matrices

Let:

A = [ 1  2 ]
    [ 3  4 ]

B = [ 5  0 ]
    [ -1 2 ]

Compute AB:

  • Top-left entry: (row1 of A)·(col1 of B) = 1·5 + 2·(-1) = 5 - 2 = 3
  • Top-right entry: (row1 of A)·(col2 of B) = 1·0 + 2·2 = 4
  • Bottom-left: 3·5 + 4·(-1) = 15 - 4 = 11
  • Bottom-right: 3·0 + 4·2 = 8

So:

AB = [ 3  4 ]
     [ 11 8 ]

What can go wrong (very common):

  • Multiplying matrices in the wrong order. In general, AB \ne BA.
  • Not checking dimensions: the inside dimensions must match.
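The row-by-column rule, with the dimension check built in, can be sketched as follows (matrices as lists of rows; the helper name mat_mul is illustrative):

```python
def mat_mul(A, B):
    """Row-by-column matrix product; A is m x n, B is n x p."""
    rows_a, cols_a = len(A), len(A[0])
    rows_b, cols_b = len(B), len(B[0])
    if cols_a != rows_b:                 # the inside dimensions must match
        raise ValueError("dimension mismatch: cannot multiply")
    return [[sum(A[i][k] * B[k][j] for k in range(cols_a))
             for j in range(cols_b)]
            for i in range(rows_a)]

A = [[1, 2], [3, 4]]
B = [[5, 0], [-1, 2]]
print(mat_mul(A, B))  # [[3, 4], [11, 8]]
print(mat_mul(B, A))  # [[5, 10], [5, 6]] -- AB != BA in general
```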

Identity matrix and zero matrix (as operation references)

The identity matrix acts like “do nothing” for multiplication. For 2x2:

I = [ 1 0 ]
    [ 0 1 ]

If dimensions match, then I\vec{x} = \vec{x} and AI = IA = A.

The zero matrix has all entries 0 and acts like an additive identity: A + 0 = A.

Matrix-vector multiplication

This is the key bridge to transformations.

If:

A = [ a  b ]
    [ c  d ]

and a vector is written as a column:

v = [ x ]
    [ y ]

then:

Av = [ ax + by ]
     [ cx + dy ]

This tells you exactly how the matrix transforms the point \langle x, y \rangle.

Worked example: applying a matrix to a vector

Let:

A = [ 2  -1 ]
    [ 0   3 ]

v = [ 4 ]
    [ 1 ]

Then:

Av = [ 2·4 + (-1)·1 ] = [ 7 ]
     [ 0·4 + 3·1    ]   [ 3 ]

So the output vector is \langle 7, 3 \rangle.
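The matrix-vector rule fits in a few lines (the helper name mat_vec is illustrative):

```python
def mat_vec(A, v):
    """Apply a 2x2 matrix [[a, b], [c, d]] to the vector <x, y>."""
    a, b = A[0]
    c, d = A[1]
    x, y = v
    return (a*x + b*y, c*x + d*y)

A = [[2, -1], [0, 3]]
print(mat_vec(A, (4, 1)))  # (7, 3)
```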

Exam Focus
  • Typical question patterns
    • Perform matrix operations (add, scale, multiply) and interpret what the result means.
    • Multiply a matrix by a vector to get a new point or new vector.
    • Determine whether a product is defined by checking dimensions.
  • Common mistakes
    • Doing entry-by-entry multiplication instead of row-by-column.
    • Forgetting that order matters (computing BA when asked for AB).
    • Dimension mismatches (trying to multiply matrices that cannot be multiplied).

Linear Transformations and Matrices

A linear transformation in two dimensions is a function that takes vectors as inputs and outputs vectors, preserving two key structures: straight lines remain straight, and the origin stays fixed (among other properties). In this course, the most important takeaway is that many familiar geometric changes—stretching, reflecting, rotating—can be represented using matrices.

Why transformations matter

Transformations connect algebra (matrices) to geometry (how shapes move). This is useful for:

  • computer graphics (moving and rotating objects),
  • physics (changing coordinate systems),
  • repeated processes (iterating a transformation multiple times),
  • understanding composition of functions in a new setting.

The “columns tell the story” principle

For a 2x2 matrix:

A = [ a  b ]
    [ c  d ]

Think of its columns as the images of the basic unit vectors:

  • The first column is where \langle 1, 0 \rangle goes.
  • The second column is where \langle 0, 1 \rangle goes.

Reason: any vector \langle x, y \rangle can be built from those unit directions:

\langle x, y \rangle = x\langle 1, 0 \rangle + y\langle 0, 1 \rangle

When you apply a linear transformation (matrix) to that combination, it transforms each unit direction and keeps the same coefficients x and y.

What can go wrong: Students often try to guess a transformation from a matrix by looking only at diagonal entries. That works for pure scalings, but not for shears/rotations where off-diagonal entries matter.
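The "columns tell the story" principle is easy to confirm numerically: applying any 2x2 matrix to the unit vectors returns its columns (helper name mat_vec is illustrative):

```python
def mat_vec(A, v):
    """Apply a 2x2 matrix to a 2D vector."""
    return (A[0][0]*v[0] + A[0][1]*v[1],
            A[1][0]*v[0] + A[1][1]*v[1])

A = [[1, 2], [3, 4]]
# the images of the unit vectors are exactly the columns of A
print(mat_vec(A, (1, 0)))  # (1, 3): first column
print(mat_vec(A, (0, 1)))  # (2, 4): second column
```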

Common transformations and their matrices

Below are standard 2D transformations written in plain text.

Scaling

Scale horizontally by factor s_x and vertically by factor s_y:

[ s_x  0  ]
[ 0    s_y]

Applied to \langle x, y \rangle, it becomes \langle s_x x, s_y y \rangle.

Reflection

Reflect across the x-axis:

[ 1  0 ]
[ 0 -1 ]

Reflect across the y-axis:

[ -1 0 ]
[ 0  1 ]

Shear

A horizontal shear (pushes points sideways depending on their y-value):

[ 1  k ]
[ 0  1 ]

Then \langle x, y \rangle maps to \langle x + ky, y \rangle.

Rotation

A rotation by angle \theta counterclockwise about the origin is:

[ cos(θ)  -sin(θ) ]
[ sin(θ)   cos(θ) ]

Even if you do not memorize every matrix, recognizing that matrices can encode rotations is important.
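The rotation matrix can be built and spot-checked numerically; a 90° counterclockwise rotation should send ⟨1, 0⟩ to ⟨0, 1⟩ (helper name rotation is illustrative; the tiny nonzero first component comes from floating-point cos(π/2)):

```python
import math

def rotation(theta):
    """2x2 counterclockwise rotation matrix for angle theta (radians)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

R = rotation(math.pi / 2)   # 90 degrees counterclockwise
x, y = 1.0, 0.0
image = (R[0][0]*x + R[0][1]*y, R[1][0]*x + R[1][1]*y)
print(image)  # approximately (0.0, 1.0): <1, 0> rotates onto <0, 1>
```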

Transforming shapes: what happens to a figure?

A linear transformation maps every point of a shape to a new location. To transform a polygon, you can transform its vertices and connect the transformed points in the same order.

Key geometric effects to watch:

  • Length and angle changes: Not all linear transformations preserve distances/angles.
  • Parallel lines stay parallel under linear transformations.
  • The origin stays fixed because A\vec{0} = \vec{0}.

Worked example: transform a triangle

Suppose triangle vertices are:

P = \langle 1, 0 \rangle

Q = \langle 2, 1 \rangle

R = \langle 0, 2 \rangle

Apply:

A = [ 2  0 ]
    [ 1  1 ]

Compute each image using matrix-vector multiplication:

For P (as column [1;0]):

AP = [ 2·1 + 0·0 ] = [ 2 ]
     [ 1·1 + 1·0 ]   [ 1 ]

So P' = \langle 2, 1 \rangle.

For Q (column [2;1]):

AQ = [ 2·2 + 0·1 ] = [ 4 ]
     [ 1·2 + 1·1 ]   [ 3 ]

So Q' = \langle 4, 3 \rangle.

For R (column [0;2]):

AR = [ 2·0 + 0·2 ] = [ 0 ]
     [ 1·0 + 1·2 ]   [ 2 ]

So R' = \langle 0, 2 \rangle.

Now plot P', Q', R' and connect them to get the transformed triangle.
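Transforming a polygon vertex by vertex can be sketched as a short loop (helper names are illustrative):

```python
def mat_vec(A, v):
    """Apply a 2x2 matrix to a 2D vector."""
    return (A[0][0]*v[0] + A[0][1]*v[1],
            A[1][0]*v[0] + A[1][1]*v[1])

A = [[2, 0], [1, 1]]
triangle = [(1, 0), (2, 1), (0, 2)]        # vertices P, Q, R
image = [mat_vec(A, p) for p in triangle]  # transform EVERY vertex
print(image)  # [(2, 1), (4, 3), (0, 2)]
```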

Composition of transformations

If you apply transformation B first, then transformation A, the combined transformation is AB (matrix multiplication). This is one of the main reasons matrix multiplication is defined the way it is: it matches function composition.

What can go wrong: The order reverses compared to the wording. “First B, then A” corresponds to AB.

Exam Focus
  • Typical question patterns
    • Apply a given matrix to points/vectors and describe the geometric effect.
    • Determine the image of the unit square or a polygon under a transformation.
    • Build a composite transformation by multiplying matrices in the correct order.
  • Common mistakes
    • Reversing matrix order when composing transformations.
    • Assuming all transformations preserve distances (only certain ones do, like pure rotations/reflections).
    • Forgetting to transform every vertex (transforming only one point is not enough to map a whole shape).

Matrices as Functions

Once you see a matrix as a transformation machine, it’s natural to treat it as a function. In this view, the input is a vector, and the output is a vector.

Defining a matrix function

A typical matrix function in two dimensions is:

f(\vec{x}) = A\vec{x}

where A is a fixed 2x2 matrix and \vec{x} is a 2D input vector (often written as a column).

This parallels familiar function notation like f(x)=mx+b, except now the “input” is a vector and the “multiplier” is a matrix.

Why treat matrices like functions?

Thinking of matrices as functions helps you:

  • use function language (domain, range, composition, inverse) in a new setting,
  • model real processes where the “state” has multiple components (like a 2D position or two-category population),
  • understand repeated application (iteration), like applying the same transformation over and over.

Composition works like function composition

If:

f(\vec{x}) = A\vec{x}

and

g(\vec{x}) = B\vec{x}

then:

f(g(\vec{x})) = A(B\vec{x}) = (AB)\vec{x}

So composing matrix functions corresponds exactly to multiplying matrices.

Worked example: composition

Let:

A = [ 1  1 ]
    [ 0  1 ]

B = [ 2  0 ]
    [ 0  3 ]

Interpretation:

  • B scales x by 2 and y by 3.
  • A shears horizontally: \langle x, y \rangle becomes \langle x+y, y \rangle.

Compute the composite “first B then A”:

AB = [ 1  1 ] [ 2  0 ] = [ 1·2+1·0   1·0+1·3 ] = [ 2  3 ]
     [ 0  1 ] [ 0  3 ]   [ 0·2+1·0   0·0+1·3 ]   [ 0  3 ]

So the combined matrix is:

AB = [ 2  3 ]
     [ 0  3 ]

That means the composite function is \vec{x} \mapsto (AB)\vec{x}.

What can go wrong: If you compute BA instead, you are modeling “first A then B,” which is a different function.
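The worked composition can be checked two ways: multiply the matrices first, or apply B then A step by step; both should give the same output, while BA does not (helper names are illustrative):

```python
def mat_mul(A, B):
    """Row-by-column product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_vec(M, v):
    """Apply a 2x2 matrix to a 2D vector."""
    return (M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1])

A = [[1, 1], [0, 1]]   # shear: <x, y> -> <x + y, y>
B = [[2, 0], [0, 3]]   # scale x by 2, y by 3

AB = mat_mul(A, B)                       # models "first B, then A"
v = (1, 1)
step_by_step = mat_vec(A, mat_vec(B, v))
print(AB)             # [[2, 3], [0, 3]]
print(step_by_step)   # (5, 3)
assert mat_vec(AB, v) == step_by_step
assert mat_mul(B, A) != AB  # BA models "first A, then B" -- different map
```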

Inverse idea (when it exists)

In function language, an inverse “undoes” a function. For matrices, an inverse matrix (when it exists) undoes the transformation.

If a matrix A has an inverse, denoted A^{-1}, then:

A^{-1}(A\vec{x}) = \vec{x}

and:

A(A^{-1}\vec{x}) = \vec{x}

Not every matrix has an inverse. Geometrically, a matrix fails to be invertible if it collapses the plane onto a line or a point (so information is lost and you can’t uniquely recover the original input).

Practical takeaway: If different input vectors can map to the same output vector, you cannot define a true inverse function.
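A concrete non-invertible case: the matrix below collapses the plane onto a line, so two different inputs land on the same output and no inverse can recover the original (the example matrix is illustrative):

```python
def mat_vec(A, v):
    """Apply a 2x2 matrix to a 2D vector."""
    return (A[0][0]*v[0] + A[0][1]*v[1],
            A[1][0]*v[0] + A[1][1]*v[1])

A = [[1, 1], [1, 1]]   # collapses every input onto the line y = x
# two different inputs, same output: information is lost
print(mat_vec(A, (2, 0)))  # (2, 2)
print(mat_vec(A, (0, 2)))  # (2, 2)
```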

Iteration: applying the same matrix repeatedly

Sometimes a process applies the same linear rule again and again. If:

\vec{x}_{n+1} = A\vec{x}_n

then:

\vec{x}_2 = A\vec{x}_1 = A(A\vec{x}_0) = A^2\vec{x}_0

and in general:

\vec{x}_n = A^n\vec{x}_0

This shows how matrix powers represent repeated applications of the same transformation.

Worked example: two-step iteration

Let:

A = [ 0  1 ]
    [ 1  0 ]

This swaps coordinates: \langle x, y \rangle becomes \langle y, x \rangle.

Apply it twice:

  • After one application: \langle x, y \rangle \mapsto \langle y, x \rangle
  • After two applications: \langle y, x \rangle \mapsto \langle x, y \rangle

So applying the function twice returns you to the original vector, meaning A^2 acts like the identity transformation.
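Matrix powers as repeated application can be sketched with a simple loop (helper names mat_mul and mat_power are illustrative; real libraries use faster algorithms):

```python
def mat_mul(A, B):
    """Row-by-column product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_power(A, n):
    """A^n by repeated multiplication (n >= 0)."""
    result = [[1, 0], [0, 1]]   # start from the identity
    for _ in range(n):
        result = mat_mul(result, A)
    return result

A = [[0, 1], [1, 0]]       # swaps coordinates: <x, y> -> <y, x>
print(mat_power(A, 2))     # [[1, 0], [0, 1]] -- the identity, as expected
print(mat_power(A, 5))     # [[0, 1], [1, 0]] -- odd powers still swap
```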

Real-world modeling connection (simple but important)

Matrices can model systems where the next state depends linearly on the current state—for example, moving between two categories each step (like simplified migration between two regions, or shifting percentages between two groups). The key is that the “state” is a vector and the update rule is multiplication by a matrix.

Even when the context changes, the math skill is the same: interpret the matrix entries as “how much of each component contributes to each new component.”

Exam Focus
  • Typical question patterns
    • Treat \vec{x} \mapsto A\vec{x} as a function: evaluate it at given vectors and interpret the output.
    • Compose matrix functions and match the composition to the correct matrix product.
    • Analyze repeated application (using A^n\vec{x}_0 conceptually or with small values of n).
  • Common mistakes
    • Confusing A\vec{x} (a function output) with A + \vec{x} (not generally defined).
    • Reversing composition order: writing the wrong product for “first” and “then.”
    • Assuming every matrix transformation can be undone (not all matrices are invertible).