Vectors in Cartesian Coordinates: Components, Orthogonality, and 3D Decomposition
Coordinate Systems and Notation
The speaker introduces x, y, z and the corresponding basis vectors $\hat{\mathbf{i}}$, $\hat{\mathbf{j}}$, $\hat{\mathbf{k}}$.
Other possible components mentioned: theta and phi (suggesting angular coordinates), and the contrast between Cartesian and polar coordinate systems.
The idea of components can be framed as direction plus magnitude; in this context, the components are the coordinates of the vector in the system we build around it (the Cartesian grid).
We typically use ordered triplets to denote coordinates, e.g., $\mathbf{a} = (a_x, a_y, a_z)$. The vectors $\hat{\mathbf{i}}$, $\hat{\mathbf{j}}$, $\hat{\mathbf{k}}$ form a basis for 3D space in Cartesian coordinates.
Orientation of the axes and the “coordinates” discussion ties to how we describe motion, rotation, and vectors in space.
The speaker notes that we will not continually live in this schematic space; rotations and other operations will move us away from a fixed origin, but we still decompose into base components.
Terms introduced: Cartesian grid, polar coordinates, and ordered triplets as ways to represent the same geometric object in different coordinate systems.
Orthogonality and Unit Vectors
The axes are mutually perpendicular: the angle between any two distinct axes is 90 degrees.
Common ways to describe this relationship: the axes meet at a right angle; equivalently, they are normal, orthogonal, or perpendicular to one another.
Orthogonality is key because orthogonal (perpendicular) components allow us to ignore coupling between dimensions and treat each dimension independently.
This is why we decompose vectors into orthogonal components: it simplifies analysis, especially when rotations come into play.
The unit vectors form an orthonormal basis: they are unit length and mutually orthogonal.
Orthogonality example in the transcript: $\hat{\mathbf{i}} \cdot \hat{\mathbf{j}} = 0$ (the dot product of distinct basis vectors is zero).
Properties of unit vectors:
Each unit vector has magnitude 1: $\|\hat{\mathbf{i}}\| = \|\hat{\mathbf{j}}\| = \|\hat{\mathbf{k}}\| = 1$.
Each unit vector points along its respective axis: $\hat{\mathbf{i}}$ along the x-axis, $\hat{\mathbf{j}}$ along the y-axis, $\hat{\mathbf{k}}$ along the z-axis.
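The orthonormality properties above can be checked directly. A minimal sketch in plain Python (the transcript contains no code; the names `i_hat`, `dot`, and `norm` are illustrative):

```python
import math

# Standard Cartesian basis vectors represented as plain tuples.
i_hat = (1.0, 0.0, 0.0)
j_hat = (0.0, 1.0, 0.0)
k_hat = (0.0, 0.0, 1.0)

def dot(u, v):
    """Dot product of two 3-vectors."""
    return sum(a * b for a, b in zip(u, v))

def norm(v):
    """Euclidean length of a 3-vector."""
    return math.sqrt(dot(v, v))

# Unit length along each axis...
assert norm(i_hat) == norm(j_hat) == norm(k_hat) == 1.0
# ...and zero dot product between any two distinct axes.
assert dot(i_hat, j_hat) == dot(j_hat, k_hat) == dot(i_hat, k_hat) == 0.0
```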
Vector Components in a Cartesian Basis
Any vector $\mathbf{a}$ can be written as a sum of its axis-aligned components: $\mathbf{a} = a_x\hat{\mathbf{i}} + a_y\hat{\mathbf{j}} + a_z\hat{\mathbf{k}}$.
Here, $a_x$, $a_y$, $a_z$ are the scalar components (signed projection lengths) along the x, y, and z directions, respectively.
The sign of each component indicates direction along the axis: positive components point in the positive axis direction; negative components point in the opposite direction.
The x-component example: if the x-projection is positive, it points along +x; if negative, along -x; the magnitude is the length of the projection along that axis.
The y-component is built similarly: from the tip of the x-projection, go straight up (parallel to the y-axis) and place $a_y$ along +y (or −y if negative).
Concept of projection:
The projection of $\mathbf{a}$ onto the x-axis has length $a_x$ and direction given by the x-axis unit vector, i.e., the vector projection along x is $a_x \hat{\mathbf{i}}$.
The projection along y is $a_y \hat{\mathbf{j}}$, etc.
How many ways to express a as a sum of two vectors?
There are infinitely many pairs (b,c) such that a=b+c; however, most pairs will not align with the orthogonal basis in a convenient way.
When you choose orthogonal components, you can separate contributions along each axis cleanly: $a_x = b_x + c_x$, $a_y = b_y + c_y$, $a_z = b_z + c_z$.
Example intuition given in the transcript:
Suppose $\mathbf{a}$ has components along x and y. You can think of $\mathbf{a}$ as the sum of a vector $\mathbf{b}$ that carries negative x and some y, and a vector $\mathbf{c}$ that carries positive x and the rest of the y needed to reach $\mathbf{a}$. The exact choice of $\mathbf{b}$ and $\mathbf{c}$ is not unique; what matters for simplification is choosing components aligned with the standard basis so we can add them easily.
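The non-uniqueness claim is easy to demonstrate numerically. A small sketch (the specific vectors chosen here are illustrative, not from the transcript):

```python
# Two different pairs (b, c) that sum to the same a = (3, 4, 0).
a = (3.0, 4.0, 0.0)

def vadd(u, v):
    """Component-wise sum of two 3-vectors."""
    return tuple(x + y for x, y in zip(u, v))

# One arbitrary split: b carries negative x, c overshoots to compensate.
b1, c1 = (-1.0, 2.0, 0.0), (4.0, 2.0, 0.0)
# The axis-aligned split: one vector per basis direction.
b2, c2 = (3.0, 0.0, 0.0), (0.0, 4.0, 0.0)

assert vadd(b1, c1) == a
assert vadd(b2, c2) == a
```

Both pairs reconstruct the same vector, but only the second yields components that line up with $\hat{\mathbf{i}}$ and $\hat{\mathbf{j}}$ directly.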
Vector Addition, Resultant, and Component Decomposition
Vector addition in component form:
If $\mathbf{a} = \mathbf{b} + \mathbf{c}$, then the components add component-wise:
$$\begin{cases}
a_x = b_x + c_x, \\
a_y = b_y + c_y, \\
a_z = b_z + c_z.
\end{cases}$$
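Component-wise addition is a one-liner in code. A minimal sketch in plain Python (the function name `add` and the sample vectors are illustrative):

```python
def add(b, c):
    """Vector addition done component-wise:
    a_x = b_x + c_x, a_y = b_y + c_y, a_z = b_z + c_z."""
    return tuple(bi + ci for bi, ci in zip(b, c))

b = (1.0, -2.0, 3.0)
c = (4.0, 2.0, -1.0)
a = add(b, c)
assert a == (5.0, 0.0, 2.0)  # (1+4, -2+2, 3-1)
```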
The geometric interpretation of vector addition:
Place the tail of c at the head of b (the tail-to-head rule).
The resulting vector runs from the tail of $\mathbf{b}$ to the head of $\mathbf{c}$.
This is the “resulting vector” in the transcript’s phrasing.
This notation (a=b+c) is convenient in physics because it mirrors how we physically add contributions from different directions, and because orthogonal components can be treated independently.
Separation into axis components allows straightforward reconstruction:
You can group all x-components together and all y-components together to analyze their contributions separately, then recombine to obtain the full vector.
Explicit component-wise reconstruction:
Suppose you know $a_x$ and $a_y$ (and $a_z$ if in 3D); the full vector is recovered as the sum of its axis projections: $\mathbf{a} = a_x\hat{\mathbf{i}} + a_y\hat{\mathbf{j}} + a_z\hat{\mathbf{k}}$.
Three-Dimensional Orientation and Axis Conventions
The transcript discusses how to represent the z-axis, including orientation conventions:
If we visualize the x and y axes on a plane, the z-axis can be considered to come out of the page or go into the page.
Positive z is conventionally drawn as a dot with a circle around it (a vector coming toward you).
A vector along +z has a nonzero z-component ($a_z > 0$) and zero x and y components in the Cartesian basis.
The negative z direction would correspond to the opposite convention (into the page).
In 3D, we use three mutually orthogonal axes (x, y, z) with their respective unit vectors $\hat{\mathbf{i}}$, $\hat{\mathbf{j}}$, $\hat{\mathbf{k}}$ to describe any vector.
The importance of dimensionality:
The discussion acknowledges the need for three components to describe a vector in 3D space, especially when discussing rotations and 3D orientation.
Practical Formulas and Conventions (Summary)
Vector decomposition in Cartesian basis: $\mathbf{a} = a_x\hat{\mathbf{i}} + a_y\hat{\mathbf{j}} + a_z\hat{\mathbf{k}}$.
Components are scalars representing the projection lengths along each axis.
Unit vectors and orthonormal basis: $\|\hat{\mathbf{i}}\| = \|\hat{\mathbf{j}}\| = \|\hat{\mathbf{k}}\| = 1$, with distinct basis vectors mutually orthogonal.
Magnitude of a general vector: $\|\mathbf{a}\| = \sqrt{a_x^2 + a_y^2 + a_z^2}$.
Projection of a vector onto an axis:
Scalar component: $a_x = \mathbf{a} \cdot \hat{\mathbf{i}}$.
Vector projection along x: $a_x \hat{\mathbf{i}}$.
Dot-product intuition for axis directions:
The dot product with a basis vector isolates the corresponding component.
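This "dotting with a basis vector isolates a component" idea can be sketched in a few lines of plain Python (the vector values here are illustrative, not from the transcript):

```python
def dot(u, v):
    """Dot product of two 3-vectors."""
    return sum(x * y for x, y in zip(u, v))

i_hat = (1.0, 0.0, 0.0)
a = (3.0, -2.0, 5.0)

# Dotting with a basis vector isolates that component: a_x = a · i-hat.
a_x = dot(a, i_hat)
assert a_x == 3.0

# The vector projection along x is a_x * i-hat.
proj_x = tuple(a_x * e for e in i_hat)
assert proj_x == (3.0, 0.0, 0.0)
```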
Tail-to-head addition and resultant:
If $\mathbf{a} = \mathbf{b} + \mathbf{c}$, the resultant is the vector from the tail of $\mathbf{b}$ to the head of $\mathbf{c}$.
3D orientation conventions for z:
Positive z is typically drawn as a dot (out of the page);
Negative z is drawn as a cross (into the page).
Polar vs Cartesian context (brief):
Theta and phi may appear in polar or spherical coordinates as alternative coordinates for describing direction and magnitude; Cartesian (x, y, z) provides a basis that makes orthogonal decomposition straightforward.
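The bridge between the two descriptions is a standard coordinate conversion. A sketch, assuming the physics convention (theta measured from the +z axis, phi as azimuth from +x), which the transcript does not spell out:

```python
import math

def spherical_to_cartesian(r, theta, phi):
    """Convert spherical (r, theta, phi) to Cartesian (x, y, z),
    with theta the polar angle from +z and phi the azimuth from +x."""
    x = r * math.sin(theta) * math.cos(phi)
    y = r * math.sin(theta) * math.sin(phi)
    z = r * math.cos(theta)
    return (x, y, z)

# theta = 90 degrees, phi = 0 puts a unit vector on the +x axis.
x, y, z = spherical_to_cartesian(1.0, math.pi / 2, 0.0)
assert abs(x - 1.0) < 1e-12 and abs(y) < 1e-12 and abs(z) < 1e-12
```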
Practical scaling example (from the scene):
A measurement analogy mentioned: using a scale where 1 mm corresponds to 10 pounds; this illustrates how one might physically gauge projection or angles in a lab setting.
Connections, Examples, and Practical Implications
Foundational principle: Orthogonality enables independent treatment of each spatial dimension, which is essential for rotation analyses and many physics problems.
Relation to rotations: Understanding how to express vectors in a fixed orthonormal basis (i, j, k) prepares you for rotating coordinate systems and applying transformation matrices.
Real-world relevance: In physics, engineering, computer graphics, and robotics, decomposing forces, velocities, or displacements into Cartesian components simplifies problem-solving and enables modular design.
Conceptual takeaway: While you can express a vector as a sum of any two vectors (b+c), only sums aligned with the orthonormal basis give you simple, interpretable components along x, y, and z. This is why we prefer axis-aligned decompositions for analysis.
Quick Worked-Concept (Illustrative)
Example: Let $\mathbf{a}$ have components $a_x = 3$, $a_y = -2$, $a_z = 5$.
Write the vector as $\mathbf{a} = 3\hat{\mathbf{i}} - 2\hat{\mathbf{j}} + 5\hat{\mathbf{k}}$.
Its magnitude is $\|\mathbf{a}\| = \sqrt{3^2 + (-2)^2 + 5^2} = \sqrt{9 + 4 + 25} = \sqrt{38}$.
Conceptual note: Different decompositions into two non-orthogonal vectors can also yield the same a but they are not as convenient for analysis as the orthogonal, axis-aligned decomposition.
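The worked example above translates directly into code, using the same components:

```python
import math

# Components from the worked example: a = 3i - 2j + 5k.
a = (3.0, -2.0, 5.0)

# Magnitude: square root of the sum of squared components.
magnitude = math.sqrt(sum(c * c for c in a))
assert abs(magnitude - math.sqrt(38)) < 1e-12  # sqrt(38) ≈ 6.164
```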
Philosophical and Practical Implications
Philosophically: the method reflects a broader scientific principle: simplify complex systems by choosing a frame of reference in which components decouple; this often reveals underlying structure and facilitates reproducible analysis.
Practically: Mastery of vector components, orthogonality, and tail-to-head addition is foundational to linear algebra, kinematics, dynamics, and computer graphics.
Computational tip: Always identify an orthonormal basis (like $\hat{\mathbf{i}}$, $\hat{\mathbf{j}}$, $\hat{\mathbf{k}}$) to express vectors; this makes projections, rotations, and linear combinations straightforward.
Key Takeaways
Cartesian coordinates express vectors as sums of axis-aligned components with unit basis vectors.
Orthogonality makes component-wise analysis valid and simplifies calculations, especially for rotations and projections.
A vector in 3D can be written as $\mathbf{a} = a_x\hat{\mathbf{i}} + a_y\hat{\mathbf{j}} + a_z\hat{\mathbf{k}}$.
Projections and the tail-to-head rule are central to understanding vector addition and decomposition.
Although multiple decompositions into two vectors exist, the orthogonal decomposition into axis components is the most practical for analysis and problem-solving.