Linear Algebra in Three Dimensions

Visual Linear Algebra Online, Section 1.6

The solution set of a certain system of three linear equations in three unknowns is the line of intersection of three planes. ‘Typically’, however, three planes intersect at just one point.

Linear algebra has immense practical value in our modern world. Statistics, business, science, and engineering all make heavy use of the subject.

As a simple concrete example, consider a network of one-way roads as shown in the diagram below. The letters A, B, and C label the intersections while the numbers and the letters x, y, and z represent traffic flow rates. The flow rates could be the number of cars per 10-minute period, for example.

A network of one-way roads. The letters A, B, and C label the intersections while the numbers and the letters x, y, and z represent traffic flow rates.

The basic principle that governs such a network is this: for any intersection, the inflow equals the outflow.

Because of this, we can conclude that x+y=20 (intersection A), 80=y+z (intersection B), and z=x+60 (intersection C).

We can rewrite these three linear equations as a linear system in the following standard form.

\begin{cases}\begin{array}{ccccccc} x & + & y & &  & = & 20 \\  & & y & + & z & = & 80 \\ x & & & - & z & = & -60 \end{array}\end{cases}

The method of elimination can then be used to solve for the unknown traffic flow rates x, y, and z. With three equations and three unknowns, this example can be thought of as a prototype arising in three-dimensional linear algebra.

However, this system turns out to have infinitely many solutions, which is not typical of an example of this type (with 3 equations and 3 unknowns).
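If you have access to Python with the SymPy library (an optional tool, not required for anything in this section), you can let the computer do the elimination as a preview. The rest of this section carries out the same computation by hand. This is only a minimal sketch.

```python
# A sketch using SymPy (an assumed, optional tool) to solve the traffic system.
import sympy as sp

x, y, z = sp.symbols('x y z')

equations = [
    sp.Eq(x + y, 20),    # intersection A
    sp.Eq(y + z, 80),    # intersection B
    sp.Eq(x - z, -60),   # intersection C
]

# linsolve returns the solution set, parametrized by the free variable z.
print(sp.linsolve(equations, x, y, z))   # expected: {(z - 60, 80 - z, z)}
```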

Elementary Row Operations on the Augmented Matrix

The Augmented Matrix

As discussed in Section 1.5, “Matrices and Linear Transformations in Low Dimensions”, the method of elimination can be used on the corresponding augmented matrix. This is a rectangular array of numbers which includes the coefficients of the unknowns from the left-hand sides of the equations in the system and the constants from the right-hand sides. Once again, our system is the following.

\begin{cases}\begin{array}{ccccccc} x & + & y & &  & = & 20 \\  & & y & + & z & = & 80 \\ x & & & - & z & = & -60 \end{array}\end{cases}

The augmented matrix for this system is shown below. Notice that 0’s must be used when an unknown is missing from an equation.

\left[\begin{array}{cccc} 1 & 1 & 0 & 20 \\ 0 & 1 & 1 & 80 \\ 1 & 0 & -1 & -60 \end{array}\right]

Symbolically, the first elementary row operation we will use is -R_{1}+R_{3}\longrightarrow R_{3}. This represents replacing row 3 by -1 times row 1 added to row 3. Its purpose is to turn the “1” in the lower left corner into a “0”. This represents “eliminating” the unknown x from the third equation of the system. Here is the result.

\left[\begin{array}{cccc} 1 & 1 & 0 & 20 \\ 0 & 1 & 1 & 80 \\ 1 & 0 & -1 & -60 \end{array}\right]\xrightarrow{-R_{1}+R_{3}\longrightarrow R_{3}}\left[\begin{array}{cccc} 1 & 1 & 0 & 20 \\ 0 & 1 & 1 & 80 \\ 0 & -1 & -1 & -80 \end{array}\right]

Next, eliminate the leading “-1” in the new third row by performing R_{2}+R_{3}\longrightarrow R_{3}.

\left[\begin{array}{cccc} 1 & 1 & 0 & 20 \\ 0 & 1 & 1 & 80 \\ 0 & -1 & -1 & -80 \end{array}\right]\xrightarrow{R_{2}+R_{3}\longrightarrow R_{3}}\left[\begin{array}{cccc} 1 & 1 & 0 & 20 \\ 0 & 1 & 1 & 80 \\ 0 & 0 & 0 & 0 \end{array}\right]

The last row, now consisting of just zeros, represents the equation 0=0, which is superfluous. Therefore, we can just focus on the top two rows.

Our final step is to eliminate the nonzero number above the “leading 1” in row two. This has the effect of eliminating the y from the first equation of the system.

Do this by performing the elementary row operation -R_{2}+R_{1}\longrightarrow R_{1}. Here is the result.

\left[\begin{array}{cccc} 1 & 1 & 0 & 20 \\ 0 & 1 & 1 & 80 \\ 0 & 0 & 0 & 0 \end{array}\right]\xrightarrow{-R_{2}+R_{1}\longrightarrow R_{1}} \left[\begin{array}{cccc} 1 & 0 & -1 & -60 \\ 0 & 1 & 1 & 80 \\ 0 & 0 & 0 & 0 \end{array}\right]
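If you would like a computational check of this hand computation, here is a minimal sketch, again assuming Python with SymPy is available, that reduces the augmented matrix to the same final form in one step.

```python
# A sketch (SymPy assumed) that row reduces the augmented matrix and
# reports which columns contain pivots.
import sympy as sp

augmented = sp.Matrix([
    [1, 1,  0,  20],
    [0, 1,  1,  80],
    [1, 0, -1, -60],
])

rref_matrix, pivot_columns = augmented.rref()
print(rref_matrix)    # Matrix([[1, 0, -1, -60], [0, 1, 1, 80], [0, 0, 0, 0]])
print(pivot_columns)  # (0, 1): pivots in the x and y columns, so z is free
```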

The Corresponding (Reduced) Equivalent System

If we focus on just the top two rows, the corresponding system of two linear equations in three unknowns is shown below. This system is a “reduced” form of the original system.

\begin{cases}\begin{array}{ccccccc} x & & & - & z & = & -60 \\ & & y & + & z & = & 80 \end{array} \end{cases}

There are infinitely many solutions of this system. If we choose a value of z, such as z=75, then x and y are determined. In this case, x=z-60=75-60=15 and y=80-z=80-75=5.

Any other value of z works as well, though some produce negative flow rates. For example, if z=40, then x=z-60=40-60=-20 and y=80-z=80-40=40. Negative flow rates should not be allowed.

On the other hand, maybe we can interpret a negative flow rate as just a way of saying that the one-way street actually goes the opposite direction from the drawing. So if x=-20, maybe that just means cars along that street are moving from intersection A towards intersection C. In that case, the problem would be with our picture — not with real life!

If our diagram is correct, however, then there are constraints on the values of x, y, and z. The value of x is maximized at x=20 when z=80 since larger values of z lead to negative values of y. And the value of y is maximized at y=20 when z=60 since smaller values of z lead to negative values of x.
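These constraints are easy to confirm with a short plain-Python loop (an optional sketch, not needed for the mathematics): it simply evaluates x = z - 60 and y = 80 - z at a few values of z.

```python
# A plain-Python check of the constraints: nonnegative flow rates require
# 60 <= z <= 80, with y maximized at z = 60 and x maximized at z = 80.
for z in (59, 60, 70, 80, 81):
    x = z - 60
    y = 80 - z
    print(f"z = {z}: x = {x}, y = {y}, nonnegative flows? {x >= 0 and y >= 0}")
```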

Free Variables and Basic Variables

In the abstract, however, z can truly be anything. Because of this, it is often called the “free variable” of the reduced system. On the other hand, the variables x and y are called “basic variables” of the reduced system.

In fact, with respect to the order that we chose for the variables (alphabetical order), the final augmented matrix is unique. Because of this, we could actually say z is the free variable of the original system. And x and y are the basic variables of the original system.

Visualizing the Solution Set

Each of the original three linear equations has a graph, in rectangular coordinates, which is a plane in three-dimensional space. Since the solution set consists of infinitely many points, these three planes must intersect at infinitely many points.

It should not be surprising that these infinitely many points lie along a straight line. After all, planes are “flat” and there is one free variable.

Here is a picture (see Section 1.5, “Matrices and Linear Transformations in Low Dimensions” for a discussion of rectangular coordinates and graphs of linear transformations {\Bbb R}^{2}\longrightarrow {\Bbb R} in three-dimensional space).

The red line is the solution set of the system with equations x+y=20, y+z=80, and x-z=-60. The solution set is the line of intersection of the three planes which are the graphs of these three linear equations. The vectors are related to the parametric vector form of the solution set (see a description of this further below).

In the picture above, x+y=20 is shaded light-orange, y+z=80 is light blue, and x-z=-60 is light green.

This picture should make sense to you. For example, in a two-dimensional xy-plane, the graph of x+y=20\Leftrightarrow y=-x+20 is a line with a slope of -1 and a y-intercept of y=20. In three-dimensional xyz-space, the value of z is irrelevant for the equation x+y=20. Therefore, its graph is the plane formed by translating the line x+y=20 straight up and down in the z-direction away from the xy-plane. This plane (the light orange one) is therefore “parallel” to the z-axis.

Parametric Vector Form of the Solution Set

What are the blue and black arrows in this picture? Why, they are three-dimensional vectors, of course. In fact, we can write the solution set in parametric vector form using these three-dimensional vectors as follows.

\left\{\left[\begin{array}{c} z-60 \\ 80-z \\ z\end{array}\right]\, \biggr|\, z\in {\Bbb R}\right\}=\left\{z\left[\begin{array}{c} 1 \\ -1 \\ 1\end{array}\right]+\left[\begin{array}{c} -60 \\ 80 \\ 0\end{array}\right]\, \biggr|\, z\in {\Bbb R}\right\}

In the picture, the blue vector is \left[\begin{array}{c} -60 \\ 80 \\ 0\end{array}\right] (it lies in the horizontal xy-plane) and the black vector is a scaled up version of \left[\begin{array}{c} 1 \\ -1 \\ 1\end{array}\right]. In fact, the black vector is 50\left[\begin{array}{c} 1 \\ -1 \\ 1\end{array}\right]=\left[\begin{array}{c} 50 \\ -50 \\ 50\end{array}\right] (it is coming “out of the screen” toward your left).

An animation with rotation might help you “see” the orientation of these planes, the line, and the vectors more clearly.

Animated version of the solution set to the same system as the perspective angle changes.

Vectors in Three Dimensions

As implied by the content in the previous section, a vector in three dimensions is an arrow sitting inside three-dimensional space. It has a length and a direction. And it doesn’t matter where its base point (“tail”) is.

A vector in {\Bbb R}^{3} can also be represented as a column vector; that is, as a 3\times 1 matrix of real numbers.

If {\bf v}=\left[\begin{array}{c} -60 \\ 80 \\ 0\end{array}\right] and {\bf w}=\left[\begin{array}{c} 1 \\ -1 \\ 1\end{array}\right], then the solution set above can be written as \{z{\bf w}+{\bf v}\, |\, z\in {\Bbb R}\}.

It is both interesting and important to note that {\bf v}=\left[\begin{array}{c} -60 \\ 80 \\ 0\end{array}\right] is one particular solution to the original system. Check this!

\begin{cases}\begin{array}{ccccccc} x & + & y & &  & = & 20 \\  & & y & + & z & = & 80 \\ x & & & - & z & = & -60 \end{array}\end{cases}

On the other hand, {\bf w}=\left[\begin{array}{c} 1 \\ -1 \\ 1\end{array}\right] does not solve the original system. Instead, it solves the corresponding homogeneous system, where the numbers on the right-hand sides are all zeros.

\begin{cases}\begin{array}{ccccccc} x & + & y & &  & = & 0 \\  & & y & + & z & = & 0 \\ x & & & - & z & = & 0 \end{array}\end{cases}

This is an important observation that should be double-checked as well.
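For readers who prefer a computational check, here is a minimal sketch, assuming Python with NumPy is available; it multiplies the coefficient matrix of the system by {\bf v} and by {\bf w}.

```python
# A sketch (NumPy assumed) checking the particular and homogeneous solutions.
import numpy as np

A = np.array([[1, 1,  0],
              [0, 1,  1],
              [1, 0, -1]])    # coefficient matrix of the original system

v = np.array([-60, 80, 0])    # claimed particular solution
w = np.array([1, -1, 1])      # claimed solution of the homogeneous system

print(A @ v)   # [ 20  80 -60], the original right-hand sides, so v solves the system
print(A @ w)   # [0 0 0], so w solves the homogeneous system
```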

We next describe the general rules for the algebra of vectors in three-dimensional linear algebra.

Vector Algebra and Geometric Interpretations in Three-Dimensional Space

Definitions

Vectors in {\Bbb R}^{3} are added (and subtracted) component-wise.

\left[\begin{array}{c} x_{1} \\ y_{1} \\ z_{1} \end{array}\right]+\left[\begin{array}{c} x_{2} \\ y_{2} \\ z_{2} \end{array}\right]=\left[\begin{array}{c} x_{1}+x_{2} \\ y_{1}+y_{2} \\ z_{1}+z_{2} \end{array}\right]

Scalar multiplication is also done component-wise.

c\left[\begin{array}{c} x \\ y \\ z \end{array}\right]=\left[\begin{array}{c} cx \\ cy \\ cz \end{array}\right]

Also note that the zero vector is {\bf 0}=\left[\begin{array}{c} 0 \\ 0 \\ 0 \end{array}\right] and that -{\bf v} can be interpreted as (-1){\bf v}. Technically this means that the scalar multiple (-1){\bf v} is the “additive inverse” of {\bf v}.
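If NumPy is available, its arrays follow exactly these component-wise rules, so a quick sketch can make the definitions concrete.

```python
# A sketch (NumPy assumed): NumPy arrays add and scale component-wise,
# exactly as in the definitions above.
import numpy as np

u = np.array([1.0, -2.0, 3.0])
v = np.array([4.0,  0.5, -1.0])

print(u + v)          # [ 5.  -1.5  2. ]   component-wise sum
print(3 * u)          # [ 3. -6.  9.]      scalar multiple
print(u + (-1) * u)   # [0. 0. 0.]         additive inverse gives the zero vector
```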

Properties

We now state the properties of vector addition as a theorem. This theorem is very easy to prove if we use the corresponding properties of real numbers.

These properties are essential to know, however. In fact, they should be so ingrained in your mind and soul that you can use them without thinking about them. These properties also generalize to both higher-dimensional vectors and to many other situations that we will consider in Visual Linear Algebra Online.

Theorem 1.6.1: The following algebra statements are true.

  1. (Closure under Vector Addition) {\bf u}+{\bf v}\in {\Bbb R}^{3} for all {\bf u},{\bf v}\in {\Bbb R}^{3}.
  2. (Commutativity) {\bf u}+{\bf v}={\bf v}+{\bf u} for all {\bf u},{\bf v}\in {\Bbb R}^{3}.
  3. (Associativity of Vector Addition) ({\bf u}+{\bf v})+{\bf w}={\bf u}+({\bf v}+{\bf w}) for all {\bf u},{\bf v},{\bf w} \in {\Bbb R}^{3}.
  4. (Zero is an Additive Identity) {\bf u}+{\bf 0}={\bf u} for all {\bf u}\in {\Bbb R}^{3}.
  5. (Additive Inverses) {\bf u}+(-{\bf u})={\bf 0} for all {\bf u}\in {\Bbb R}^{3}.
  6. (Closure under Scalar Multiplication) c{\bf u}\in {\Bbb R}^{3} for all c\in {\Bbb R} and for all {\bf u}\in {\Bbb R}^{3}.
  7. (Distributivity of Scalars over Vector Addition) c({\bf u}+{\bf v})=c{\bf u}+c{\bf v} for all c\in {\Bbb R} and for all {\bf u}, {\bf v}\in {\Bbb R}^{3}.
  8. (Distributivity of Vectors over Scalar Addition) (c+d){\bf u}=c{\bf u}+d{\bf u} for all c,d\in {\Bbb R} and for all {\bf u}\in {\Bbb R}^{3}.
  9. (Associativity of Scalar Multiplication) c(d{\bf u})=(cd){\bf u} for all c,d\in {\Bbb R} and for all {\bf u}\in {\Bbb R}^{3}.
  10. (Multiplication by the Scalar 1) 1{\bf u}={\bf u} for all {\bf u}\in {\Bbb R}^{3}.

Visualizing the Sum of Vectors in Three-Dimensional Space

Any two nonzero vectors in {\Bbb R}^{3} which are not parallel define a plane. If we make sure all vectors are based so that their “tails” are at the origin, the sum of those two vectors “lies” in the same plane. Geometrically, the sum is determined by the parallelogram law, as discussed in Section 1.2, “Vectors in Two-Dimensions”.

The sum of the red and blue vectors is the magenta (pink) vector, as determined by the parallelogram law.

What if we need to add more than two vectors? Use the “head-to-tail method”. To visualize {\bf u}+{\bf v}+{\bf w}, place the tail (base) of {\bf v} at the head (tip) of {\bf u}, then place the tail of {\bf w} at the head of {\bf v}. The vector that starts at the tail of {\bf u} and ends at the head of {\bf w} is {\bf u}+{\bf v}+{\bf w}. See the animation below.

The sum of the blue, red, and green vectors is the magenta (pink) vector. The blue, red, and green vectors do not lie in the same plane here.

The Dot Product and Angle Between Two Vectors

In three-dimensional linear algebra, the dot product can be defined in a way that is analogous to the two-dimensional case. The following formula is the definition. It is a sum of products of corresponding components.

{\bf v}\cdot {\bf w}=\left[\begin{array}{c} x_{1} \\ y_{1} \\ z_{1} \end{array}\right]\cdot \left[\begin{array}{c} x_{2} \\ y_{2} \\ z_{2} \end{array}\right]=x_{1}x_{2}+y_{1}y_{2}+z_{1}z_{2}

By using the Pythagorean Theorem in three-dimensional space, we can see that the magnitude (length) of any vector can be computed from the dot product. If {\bf v}=\left[\begin{array}{c} x \\ y \\ z \end{array}\right], then its magnitude is ||{\bf v}||=\sqrt{{\bf v}\cdot {\bf v}}=\sqrt{x^{2}+y^{2}+z^{2}}.
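Here is a short sketch (again assuming NumPy) of the dot product and magnitude formulas.

```python
# A sketch (NumPy assumed) of the dot product and the magnitude formula.
import numpy as np

v = np.array([1.0, 2.0, 2.0])
w = np.array([4.0, 0.0, -3.0])

print(np.dot(v, w))            # 1*4 + 2*0 + 2*(-3) = -2.0
print(np.sqrt(np.dot(v, v)))   # magnitude of v: sqrt(1 + 4 + 4) = 3.0
print(np.linalg.norm(v))       # 3.0 again, using the built-in norm
```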

Properties of the dot product carry over from two-dimensional to three-dimensional vectors.

Theorem 1.6.2: The following properties are true for given arbitrary vectors in {\Bbb R}^{3} and scalars in {\Bbb R}.

  1. (Commutative Property) {\bf v}\cdot {\bf w}={\bf w}\cdot {\bf v}.
  2. (Distributive Property) {\bf u}\cdot ({\bf v}+{\bf w})={\bf u}\cdot {\bf v}+{\bf u}\cdot {\bf w}.
  3. (Associative Property) (\lambda {\bf v})\cdot {\bf w}=\lambda({\bf v}\cdot {\bf w})={\bf v}\cdot (\lambda {\bf w}).
  4. (Non-negativity) {\bf v}\cdot  {\bf v}\geq 0. In addition, {\bf v}\cdot {\bf v}=0 if and only if {\bf v}={\bf 0}.
  5. (Relationship to Magnitude) ||{\bf v}||=\sqrt{{\bf v}\cdot {\bf v}}.
  6. (Triangle Inequality) ||{\bf v}+{\bf w}||\leq ||{\bf v}||+||{\bf w}||.

In Section 1.2, “Vectors in Two-Dimensions”, we saw that the dot product and the angle \theta between two nonzero vectors are related. The same formula carries over to three dimensions. That relationship, in two forms, is

{\bf v}\cdot {\bf w}=||{\bf v}||\, ||{\bf w}||\, \cos(\theta)

and

\theta=\arccos\left(\frac{{\bf v}\cdot {\bf w}}{||{\bf v}||\, ||{\bf w}||}\right)=\cos^{-1}\left(\frac{{\bf v}\cdot {\bf w}}{||{\bf v}||\, ||{\bf w}||}\right).

Note that, for nonzero vectors {\bf v} and {\bf w}, we can say \theta=\frac{\pi}{2}\mbox{ radians}=90^{\circ} if and only if {\bf v}\cdot {\bf w}=0.
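A short sketch (NumPy assumed) of the angle formula, including a perpendicularity check:

```python
# A sketch (NumPy assumed) of the angle formula for two nonzero vectors.
import numpy as np

v = np.array([1.0, 0.0, 0.0])
w = np.array([1.0, 1.0, 0.0])

cos_theta = np.dot(v, w) / (np.linalg.norm(v) * np.linalg.norm(w))
print(np.degrees(np.arccos(cos_theta)))       # approximately 45 degrees

# Perpendicularity corresponds to a zero dot product.
print(np.dot(v, np.array([0.0, 2.0, 5.0])))   # 0.0, so the angle is 90 degrees
```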

The Cross Product and the Creation of Perpendicular Vectors

Formula for the Cross Product

There is another kind of vector product that can be defined in three-dimensional linear algebra. It is called the cross product. As a binary operation, it is only defined for three-dimensional vectors.

Whereas the dot product of two vectors gives a scalar (number), the cross product of two three-dimensional vectors gives another vector. Here is its rather complicated formula.

{\bf v}\times {\bf w}=\left[\begin{array}{c} x_{1} \\ y_{1} \\ z_{1} \end{array}\right]\times \left[\begin{array}{c} x_{2} \\ y_{2} \\ z_{2} \end{array}\right]=\left[\begin{array}{c} y_{1}z_{2}-y_{2}z_{1} \\ -x_{1}z_{2}+x_{2}z_{1}  \\ x_{1}y_{2}-x_{2}y_{1} \end{array}\right]
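To make the formula concrete, here is a minimal sketch (NumPy assumed) that implements it directly and compares the result with NumPy’s built-in np.cross function.

```python
# A sketch (NumPy assumed) implementing the component formula for the cross
# product and comparing it with NumPy's built-in np.cross.
import numpy as np

def cross(v, w):
    """Cross product of two 3D vectors via the component formula above."""
    x1, y1, z1 = v
    x2, y2, z2 = w
    return np.array([y1*z2 - y2*z1,
                     -x1*z2 + x2*z1,
                     x1*y2 - x2*y1])

v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0, 6.0])

n = cross(v, w)
print(n)                           # [-3.  6. -3.]
print(np.cross(v, w))              # the same vector
print(np.dot(n, v), np.dot(n, w))  # 0.0 0.0, consistent with perpendicularity
```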

Properties of the Cross Product

And here is a list of its properties.

Theorem 1.6.3: The following properties are true for given arbitrary vectors in {\Bbb R}^{3} and scalars in {\Bbb R}.

  1. (Perpendicularity) When the given vectors are nonzero, {\bf v}\times {\bf w} is perpendicular to both {\bf v} and {\bf w}.
  2. (Magnitude) If \theta is an angle between the two given nonzero vectors, then ||{\bf v}\times {\bf w}||=||{\bf v}||\, ||{\bf w}||\, |\sin(\theta)|.
  3. (Direction) When the given vectors are nonzero, the direction of {\bf v}\times {\bf w} is given by the right-hand rule (more on this below).
  4. (Anticommutativity) {\bf v}\times {\bf w}=-({\bf w}\times {\bf v})
  5. (Nilpotency) {\bf v}\times {\bf v}={\bf 0}
  6. (Related to Nilpotency) {\bf v}\times {\bf w}={\bf 0} if and only if {\bf w}=\lambda{\bf v} for some scalar \lambda or {\bf v}={\bf 0}.
  7. (Associativity/Commutativity of Scalar Multiplication) (\lambda {\bf v})\times {\bf w}={\bf v}\times (\lambda {\bf w})=\lambda ({\bf v}\times {\bf w})
  8. (Distributivity over Vector Addition) ({\bf u}+{\bf v})\times {\bf w}={\bf u}\times {\bf w}+{\bf v}\times {\bf w} and {\bf u}\times ({\bf v}+{\bf w})={\bf u}\times {\bf v}+{\bf u}\times {\bf w}
  9. (Area of Parallelogram Spanned) If the vectors are nonzero, ||{\bf v}\times {\bf w}|| is the area of the parallelogram “spanned” by {\bf v} and {\bf w} (the parallelogram that arises from the parallelogram law for vector addition).

The Right Hand Rule

The right-hand rule of property 3 above can be described with words as follows. Given two nonzero vectors {\bf v} and {\bf w} inside three-dimensional space, determine the direction of {\bf v}\times {\bf w} by taking your right hand and orienting it so that you can curl your fingers from {\bf v} toward {\bf w}. Then, if you point your thumb out away from your hand, it will point in the direction of {\bf v}\times {\bf w}.

The cross product has many applications, especially in physics and engineering. However, one of its most basic and important applications is to analytic geometry and linear algebra in three dimensions.

Example 1

Here is the basic and important application of the cross-product in three-dimensional linear algebra: find an equation of the plane containing three noncollinear points. This will also require us to use a property of dot products: two nonzero vectors are perpendicular (orthogonal) if and only if their dot product is zero.

Suppose our points P, Q, and R have rectangular coordinates (1,2,3), (7,5,-2), and (-4,-3,6), respectively. The vectors {\bf v}=\left[\begin{array}{c} -4-1 \\ -3-2 \\ 6-3 \end{array}\right]=\left[\begin{array}{c} -5 \\ -5 \\ 3 \end{array}\right] and {\bf w}=\left[\begin{array}{c} 7-1 \\ 5-2 \\ -2-3 \end{array}\right]=\left[\begin{array}{c} 6 \\ 3 \\ -5 \end{array}\right] will then be parallel to the plane containing these points. Think about this! Note that, essentially, {\bf v} is R-P (a “displacement vector” from P to R) and {\bf w} is Q-P (a “displacement vector” from P to Q).

A Normal Vector to the Plane

Let {\bf n}={\bf v}\times {\bf w}. Using the formula above, we find that {\bf n}=\left[\begin{array}{c} 16 \\ -7 \\ 15 \end{array}\right]. Check this!

By property 1 of cross products, {\bf n} will be perpendicular to the plane containing P, Q, and R. The vector {\bf n} is often called a normal vector to the plane when it is perpendicular to the plane.

An Equation for the Plane

If (x,y,z) is an arbitrary point in the plane, then, for example, {\bf u}=\left[\begin{array}{c} x-1 \\ y-2 \\ z-3 \end{array}\right] will be parallel to the plane. Therefore {\bf n}\cdot {\bf u}=0. This is essentially an equation for the plane!

All we need to do now is just expand this equation. Since {\bf n}\cdot {\bf u}=\left[\begin{array}{c} 16 \\ -7 \\ 15 \end{array}\right]\cdot \left[\begin{array}{c} x-1 \\ y-2 \\ z-3 \end{array}\right]=16(x-1)-7(y-2)+15(z-3), an equation for the plane is 16(x-1)-7(y-2)+15(z-3)=0. If we like, this can be rearranged as 16x-7y+15z=16-14+45=47.
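Here is a minimal computational check of this example (NumPy assumed): it recomputes the normal vector and confirms that all three points satisfy 16x-7y+15z=47.

```python
# A sketch (NumPy assumed) recomputing the normal vector of Example 1 and
# checking that P, Q, and R all satisfy 16x - 7y + 15z = 47.
import numpy as np

P = np.array([1.0, 2.0, 3.0])
Q = np.array([7.0, 5.0, -2.0])
R = np.array([-4.0, -3.0, 6.0])

v = R - P                # [-5. -5.  3.]
w = Q - P                # [ 6.  3. -5.]
n = np.cross(v, w)
print(n)                 # [16. -7. 15.]

for point in (P, Q, R):
    print(np.dot(n, point))   # 47.0 for each of the three points
```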

Here is a picture to illustrate all of this. The normal vector {\bf n} has been scaled down by a factor of two in length to make the picture look nicer.

The vectors {\bf v} and {\bf w} are parallel to the plane containing P, Q, and R. The vector {\bf n}={\bf v}\times {\bf w} is perpendicular (normal) to the plane (and so is {\bf n}/2).

Matrices and Linear Transformations in Three Dimensions

In Section 1.5, “Matrices and Linear Transformations in Low Dimensions”, we saw that matrix multiplication can be used to represent the formulas for linear transformations when the domain and codomain are of dimensions two or smaller. This observation carries over to three and higher dimensions.

Linear Transformations {\Bbb R}\longrightarrow {\Bbb R}^{3}

A linear transformation T:{\Bbb R}\longrightarrow {\Bbb R}^{3} is represented by a 3\times 1 matrix multiplied by a 1\times 1 matrix (a number) to give a three-dimensional vector.

T({\bf x})=A{\bf x}=A[x]=\left[\begin{array}{c} a \\ b \\ c \end{array}\right][x]=\left[\begin{array}{c} ax \\ bx \\ cx \end{array}\right]=x\left[\begin{array}{c} a \\ b \\ c \end{array}\right]

This is a linear combination (scalar multiple) of the (one) vector \left[\begin{array}{c} a \\ b \\ c \end{array}\right] with x as the weight.

Such a transformation (function) can be visualized as a parametric curve in three-dimensional space. If a\not=0, b\not=0, or c\not=0, it will be a line through the origin. In that case, the null space (kernel) will be the one-element set \{0\} and the image will be the line. Furthermore, in that case, the transformation will be one-to-one but not onto.

If a=b=c=0, then the transformation is neither one-to-one nor onto. The null space (kernel) will be \mbox{Nul}(A)=\mbox{Ker}(T)={\Bbb R} and the image will be \mbox{Im}(T)=\{{\bf 0}\}.

The image of T is also called the column space of A and we also write \mbox{Col}(A)=\mbox{Im}(T). This terminology will be explained in sections to follow.

Linear Transformations {\Bbb R}^{2}\longrightarrow {\Bbb R}^{3}

A linear transformation T:{\Bbb R}^{2}\longrightarrow {\Bbb R}^{3} is represented by a 3\times 2 matrix multiplied by a 2\times 1 vector to give a three-dimensional vector.

T({\bf x})=A{\bf x}=\left[\begin{array}{cc} a & b \\ c & d \\ e & f \\  \end{array}\right]\left[\begin{array}{c} x \\ y \end{array}\right]=\left[\begin{array}{c} ax+by \\ cx+dy \\ ex+fy \end{array}\right]=x\left[\begin{array}{c} a \\ c \\ e \end{array}\right]+y\left[\begin{array}{c} b \\ d \\ f \end{array}\right]

This is a linear combination of the vectors \left[\begin{array}{c} a \\ c \\ e \end{array}\right] and \left[\begin{array}{c} b \\ d \\ f \end{array}\right] with x and y as the corresponding weights.

If these last two vectors are nonzero and not parallel, this transformation can be visualized as a parametric surface in three-dimensional space. In fact, it will be a plane through the origin in that case.

Continuing in that case, we can say \mbox{Nul}(A)=\mbox{Ker}(T)=\{{\bf 0}\}=\left\{\left[\begin{array}{c} 0 \\ 0 \end{array}\right]\right\} and the transformation will be one-to-one. We can also say that \mbox{Col}(A)=\mbox{Im}(T) will be the plane from the previous paragraph, and the transformation will not be onto.

If the vectors forming the columns of A are nonzero but parallel, or if one column is zero while the other is not, then \mbox{Nul}(A)=\mbox{Ker}(T) will be a line through the origin in {\Bbb R}^{2} while \mbox{Col}(A)=\mbox{Im}(T) will be a line through the origin in {\Bbb R}^{3}. If A is the “zero matrix”, then \mbox{Nul}(A)=\mbox{Ker}(T) will be all of {\Bbb R}^{2} while \mbox{Col}(A)=\mbox{Im}(T) will just be the origin.

In both of these cases from the previous paragraph, the transformation is neither one-to-one nor onto.
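Before moving on, here is a short sketch (NumPy assumed, with an arbitrary 3\times 2 matrix chosen just for illustration) of the computational fact underlying this whole discussion: the product A{\bf x} is the linear combination of the columns of A weighted by the entries of {\bf x}.

```python
# A sketch (NumPy assumed): a matrix-vector product equals the linear
# combination of the matrix columns weighted by the entries of the vector.
import numpy as np

A = np.array([[1.0, 4.0],
              [2.0, 5.0],
              [3.0, 6.0]])
x, y = 2.0, -1.0

print(A @ np.array([x, y]))       # [-2. -1.  0.]
print(x * A[:, 0] + y * A[:, 1])  # the same vector, built column by column
```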

Linear Transformations {\Bbb R}^{3}\longrightarrow {\Bbb R}^{3}

A linear transformation T:{\Bbb R}^{3}\longrightarrow {\Bbb R}^{3} is represented by a 3\times 3 matrix multiplied by a 3\times 1 vector to give a three-dimensional vector.

T({\bf x})=A{\bf x}=\left[\begin{array}{ccc} a & b & c \\ d & e & f \\ g & h & i \\  \end{array}\right]\left[\begin{array}{c} x \\ y \\ z \end{array}\right]=\left[\begin{array}{c} ax+by+cz \\ dx+ey+fz \\ gx+hy+iz \end{array}\right] =x\left[\begin{array}{c} a \\ d \\ g \end{array}\right]+y\left[\begin{array}{c} b \\ e \\ h \end{array}\right]+z\left[\begin{array}{c} c \\ f \\ i \end{array}\right]

This is a linear combination of the vectors \left[\begin{array}{c} a \\ d \\ g \end{array}\right], \left[\begin{array}{c} b \\ e \\ h \end{array}\right], and \left[\begin{array}{c} c \\ f \\ i \end{array}\right] with x, y and z as the corresponding weights.

If these last three vectors are nonzero and not all on the same plane through the origin when they are based at the origin, then \mbox{Nul}(A)=\mbox{Ker}(T)=\{{\bf 0}\}=\left\{\left[\begin{array}{c} 0 \\ 0  \\ 0 \end{array}\right]\right\} and \mbox{Col}(A)=\mbox{Im}(T)={\Bbb R}^{3}. In this case, the transformation will be one-to-one and onto.

If any of the three vectors is zero or if they all lie on the same plane, then the transformation will be neither one-to-one nor onto. Its image will either be just the origin, a line through the origin, or a plane through the origin. The null space will either be all of three-dimensional space, a plane through the origin, or a line through the origin.

The ordering of the geometric objects in the last two sentences is not arbitrary. If the image is a plane through the origin then the null space will be a line through the origin. No matter what the situation is, the dimensions of the null space and the image will add up to three!

This is no accident! We are catching a glimpse of a mathematical truth called the Rank-Nullity Theorem (or just “Rank Theorem” for short).
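Here is a tiny numerical illustration of this claim (a sketch assuming SymPy, with a 3\times 3 matrix chosen just for demonstration): the rank of the matrix is the dimension of the image, the number of null space basis vectors is the dimension of the null space, and the two add up to three.

```python
# A sketch (SymPy assumed) of the Rank-Nullity Theorem for a 3x3 matrix
# chosen just for demonstration: rank + nullity = 3.
import sympy as sp

A = sp.Matrix([[1, 2, 3],
               [2, 4, 6],   # a multiple of the first row
               [0, 1, 1]])

rank = A.rank()                       # dimension of the image (column space)
nullity = len(A.nullspace())          # dimension of the null space
print(rank, nullity, rank + nullity)  # 2 1 3
```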

Linear Transformations {\Bbb R}^{3}\longrightarrow {\Bbb R}^{2}

A linear transformation T:{\Bbb R}^{3}\longrightarrow {\Bbb R}^{2} is represented by a 2\times 3 matrix multiplied by a 3\times 1 vector to give a two-dimensional vector.

T({\bf x})=A{\bf x}=\left[\begin{array}{ccc} a & b & c \\ d & e & f   \end{array}\right]\left[\begin{array}{c} x \\ y \\ z \end{array}\right]=\left[\begin{array}{c} ax+by+cz \\ dx+ey+fz  \end{array}\right] =x\left[\begin{array}{c} a \\ d  \end{array}\right]+y\left[\begin{array}{c} b \\ e  \end{array}\right]+z\left[\begin{array}{c} c \\ f  \end{array}\right]

This is a linear combination of the vectors \left[\begin{array}{c} a \\ d  \end{array}\right], \left[\begin{array}{c} b \\ e  \end{array}\right], and \left[\begin{array}{c} c \\ f \end{array}\right] with x, y and z as the corresponding weights.

If at least two of these three vectors are nonzero and not parallel, then the transformation will be onto. Otherwise, it will not be onto. Such a transformation will never be one-to-one. Its null space will either be a line through the origin, a plane through the origin, or all of {\Bbb R}^{3}.

Linear Transformations {\Bbb R}^{3}\longrightarrow {\Bbb R}

A linear transformation T:{\Bbb R}^{3}\longrightarrow {\Bbb R} is represented by a 1\times 3 matrix multiplied by a 3\times 1 vector to give a one-dimensional vector (a number).

T({\bf x})=A{\bf x}=\left[\begin{array}{ccc} a & b & c    \end{array}\right]\left[\begin{array}{c} x \\ y \\ z \end{array}\right]=\left[\begin{array}{c} ax+by+cz   \end{array}\right] =x\left[\begin{array}{c} a   \end{array}\right]+y\left[\begin{array}{c} b   \end{array}\right]+z\left[\begin{array}{c} c   \end{array}\right]

This is a linear combination of the vectors (numbers) \left[\begin{array}{c} a \end{array}\right], \left[\begin{array}{c} b   \end{array}\right], and \left[\begin{array}{c} c \end{array}\right] with x, y and z as the corresponding weights.

If a\not=0, b\not=0, or c\not=0, then the transformation will be onto. If a=b=c=0, then the image will just be the origin. Such a transformation will never be one-to-one. Its null space will either be a plane through the origin or all of {\Bbb R}^{3}.

Row Operations Can Be Used to Answer Questions

In any of these situations of three-dimensional linear algebra, elementary row operations can be used to answer questions about these properties.

Examples of Linear Transformations

In this section we will use elementary row operations in a couple of three-dimensional linear algebra examples. More examples will be considered in the exercises.

Example 2

Determine the null space and image of the linear transformation T:{\Bbb R}^{2}\longrightarrow {\Bbb R}^{3} defined by the following formula.

T({\bf x})=A{\bf x}=\left[\begin{array}{cc} 4 & -1 \\ 1 & 2 \\ -5 & -3 \end{array} \right]\left[\begin{array}{c} x \\ y  \end{array}\right]=\left[\begin{array}{c} 4x-y \\ x+2y \\ -5x-3y \end{array}\right]

Null Space

Finding the null space (kernel) \mbox{Nul}(A)=\mbox{Ker}(T) is equivalent to solving the following homogeneous system of linear equations.

\begin{cases}\begin{array}{ccccc} 4x & - & y & = & 0 \\ x & + & 2y & = & 0 \\ -5x & - & 3y & = & 0\end{array}\end{cases}

Now we show elementary row operations on the corresponding augmented matrix. Start by swapping rows 1 and 2. Then replace row 2 by -4 times row 1 added to row 2.

\left[\begin{array}{ccc} 4 & -1 & 0 \\ 1 & 2 & 0 \\ -5 & -3 & 0 \end{array}\right]\xrightarrow{R_{1}\leftrightarrow R_{2}}\left[\begin{array}{ccc} 1 & 2 & 0 \\ 4 & -1 & 0 \\ -5 & -3 & 0 \end{array}\right]\xrightarrow{-4R_{1}+R_{2}\rightarrow R_{2}}\left[\begin{array}{ccc} 1 & 2 & 0 \\ 0 & -9 & 0 \\ -5 & -3 & 0 \end{array}\right]

Next, do a replacement operation and a multiplication of a row by a nonzero constant.

\left[\begin{array}{ccc} 1 & 2 & 0 \\ 0 & -9 & 0 \\ -5 & -3 & 0 \end{array}\right]\xrightarrow{5R_{1}+R_{3}\rightarrow R_{3}}\left[\begin{array}{ccc} 1 & 2 & 0 \\ 0 & -9 & 0 \\ 0 & 7 & 0 \end{array}\right]\xrightarrow{-\frac{1}{9}R_{2}\rightarrow R_{2}}\left[\begin{array}{ccc} 1 & 2 & 0 \\ 0 & 1 & 0 \\ 0 & 7 & 0 \end{array}\right]

We are almost done. A couple more replacement operations are in order.

\left[\begin{array}{ccc} 1 & 2 & 0 \\ 0 & 1 & 0 \\ 0 & 7 & 0 \end{array}\right]\xrightarrow{-7R_{2}+R_{3}\rightarrow R_{3}}\left[\begin{array}{ccc} 1 & 2 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{array}\right]\xrightarrow{-2R_{2}+R_{1}\rightarrow R_{1}}\left[\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{array}\right]

This implies that the reduced equivalent system consists of the two equations x=0 and y=0. In other words, the null space is just the origin: \mbox{Nul}(A)=\mbox{Ker}(T)=\left\{\left[\begin{array}{c} 0 \\ 0 \end{array}\right]\right\}.

Image

The linear transformation T cannot be onto. In fact, the image is a plane through the origin in {\Bbb R}^{3}. That is because it is the set of all linear combinations of the two non-parallel column vectors \left[\begin{array}{c} 4 \\ 1 \\ -5  \end{array} \right] and \left[\begin{array}{c} -1 \\ 2 \\ -3 \end{array} \right] of A.

This is why the image of T is also called the column space of A!
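If you want a computational check of this example, here is a minimal sketch assuming SymPy is available.

```python
# A sketch (SymPy assumed) checking Example 2.
import sympy as sp

A = sp.Matrix([[ 4, -1],
               [ 1,  2],
               [-5, -3]])

print(A.nullspace())    # [], so the kernel is just the zero vector and T is one-to-one
print(A.rank())         # 2, so the image is a plane (a 2-dimensional subspace) in R^3
print(A.columnspace())  # a basis for the image: the two columns of A
```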

Example 3

Determine the null space and image of the linear transformation T:{\Bbb R}^{3}\longrightarrow {\Bbb R}^{2} defined by the following formula.

T({\bf x})=A{\bf x}=\left[\begin{array}{ccc} 4 & 1 & -5 \\ -1 & 2 & -3  \end{array} \right]\left[\begin{array}{c} x \\ y \\ z \end{array}\right]=\left[\begin{array}{c} 4x+y-5z \\ -x+2y-3z \end{array}\right]

Null Space

Finding the null space (kernel) \mbox{Nul}(A)=\mbox{Ker}(T) is equivalent to solving the following homogeneous system of linear equations.

\begin{cases}\begin{array}{ccccccc} 4x & + & y & - & 5z  & = & 0 \\ -x & + & 2y & - & 3z & = & 0\end{array}\end{cases}

Here are some elementary row operations on the corresponding augmented matrix.

\left[\begin{array}{cccc} 4 & 1 & -5 & 0 \\ -1 & 2 & -3 & 0\end{array}\right]\xrightarrow{R_{1}\leftrightarrow R_{2}}\left[\begin{array}{cccc}  -1 & 2 & -3 & 0 \\ 4 & 1 & -5 & 0\end{array}\right]\xrightarrow{-1R_{1}\rightarrow R_{1}}\left[\begin{array}{cccc}  1 & -2 & 3 & 0 \\ 4 & 1 & -5 & 0\end{array}\right]

Next,

\left[\begin{array}{cccc}  1 & -2 & 3 & 0 \\ 4 & 1 & -5 & 0\end{array}\right]\xrightarrow{-4R_{1}+R_{2}\rightarrow R_{2}}\left[\begin{array}{cccc}  1 & -2 & 3 & 0 \\ 0 & 9 & -17 & 0\end{array}\right]\xrightarrow{\frac{1}{9}R_{2}\rightarrow R_{2}}\left[\begin{array}{cccc}  1 & -2 & 3 & 0 \\ 0 & 1 & -\frac{17}{9} & 0 \end{array}\right]

Finally,

\left[\begin{array}{cccc}  1 & -2 & 3 & 0 \\ 0 & 1 & -\frac{17}{9} & 0 \end{array}\right]\xrightarrow{2R_{2}+R_{1}\rightarrow R_{1}}\left[\begin{array}{cccc}  1 & 0 & -\frac{7}{9} & 0 \\ 0 & 1 & -\frac{17}{9} & 0 \end{array}\right]

The reduced equivalent system is therefore the following.

\begin{cases}\begin{array}{ccccccc} x & & & - & \frac{7}{9}z & = & 0 \\ & & y & - & \frac{17}{9}z & = & 0 \end{array} \end{cases}

In other words, z is a free variable and x and y are basic variables. The null space (kernel) can be described in parametric vector form as shown below.

\mbox{Nul}(A)=\mbox{Ker}(T)=\left\{\left[\begin{array}{c} \frac{7}{9}z \\ \frac{17}{9}z \\ z\end{array}\right]\, \biggr|\, z\in {\Bbb R}\right\}=\left\{z\left[\begin{array}{c} \frac{7}{9} \\ \frac{17}{9} \\ 1\end{array}\right]\, \biggr|\, z\in {\Bbb R}\right\}

This represents a line through the origin in {\Bbb R}^{3}. This linear transformation is definitely not one-to-one.

Image

The image of T is the set of all linear combinations of the three vectors \left[\begin{array}{c} 4 \\ -1 \end{array}\right], \left[\begin{array}{c} 1 \\ 2 \end{array}\right], and \left[\begin{array}{c} -5 \\ -3 \end{array}\right]. Since the first two of these vectors are nonzero and not parallel, this is all of {\Bbb R}^{2}. We can say that \mbox{Col}(A)=\mbox{Im}(T)={\Bbb R}^{2}. This linear transformation is definitely onto.
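As with Example 2, here is a minimal SymPy sketch (an optional check) confirming the null space and the rank.

```python
# A sketch (SymPy assumed) checking Example 3.
import sympy as sp

A = sp.Matrix([[ 4, 1, -5],
               [-1, 2, -3]])

print(A.nullspace())   # [Matrix([[7/9], [17/9], [1]])], a line through the origin
print(A.rank())        # 2, so Col(A) = Im(T) is all of R^2 and T is onto
```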

Exercises on Three-Dimensional Linear Algebra

  1. Determine the kernel and image of the linear transformation T:{\Bbb R}^{2}\longrightarrow {\Bbb R}^{3} defined by the following formula: T({\bf x})=A{\bf x}=\left[\begin{array}{cc} -3 & 7 \\ 5 & -4 \\ 1 & 2 \end{array} \right]\left[\begin{array}{c} x \\ y  \end{array}\right]=\left[\begin{array}{c} -3x+7y \\ 5x-4y \\ x+2y \end{array}\right]. Also decide whether T is one-to-one and/or onto.
  2. Determine the kernel and image of the linear transformation T:{\Bbb R}^{3}\longrightarrow {\Bbb R}^{2} defined by the following formula: T({\bf x})=A{\bf x}=\left[\begin{array}{ccc} -3 & 5 & 1 \\ 7 & -4 & 2 \end{array} \right]\left[\begin{array}{c} x \\ y \\ z  \end{array}\right]=\left[\begin{array}{c} -3x+5y+z \\ 7x-4y+2z  \end{array}\right]. Also decide whether T is one-to-one and/or onto.
  3. Determine the kernel and image of the linear transformation T:{\Bbb R}^{3}\longrightarrow {\Bbb R}^{3} defined by the following formula: T({\bf x})=A{\bf x}=\left[\begin{array}{ccc} 5 & 10 & -5 \\ -6 & -4 & 2 \\ 2 & -3 & 5 \end{array} \right]\left[\begin{array}{c} x \\ y \\ z \end{array}\right]=\left[\begin{array}{c} 5x+10y-5z \\ -6x-4y+2z \\ 2x-3y+5z \end{array}\right]. Also decide whether T is one-to-one and/or onto.
  4. Determine the kernel and image of the linear transformation T:{\Bbb R}^{2}\longrightarrow {\Bbb R}^{3} defined by the following formula: T({\bf x})=A{\bf x}=\left[\begin{array}{cc} 1 & 2 \\ 3 & 6 \\ 4 & 8 \end{array} \right]\left[\begin{array}{c} x \\ y  \end{array}\right]=\left[\begin{array}{c} x+2y \\ 3x+6y \\ 4x+8y \end{array}\right]. Also decide whether T is one-to-one and/or onto.
  5. Determine the kernel and image of the linear transformation T:{\Bbb R}^{3}\longrightarrow {\Bbb R}^{2} defined by the following formula: T({\bf x})=A{\bf x}=\left[\begin{array}{ccc} 1 & 3 & 4 \\ 2 & 6 & 8 \end{array} \right]\left[\begin{array}{c} x \\ y \\ z  \end{array}\right]=\left[\begin{array}{c} x+3y+4z \\ 2x+6y+8z  \end{array}\right]. Also decide whether T is one-to-one and/or onto.
  6. Find an equation of the plane containing the points P=(4,7,-9), Q=(-2,3,-3), and R=(-5,-8,4).
  7. Prove the algebraic properties of vectors (Theorem 1.6.1) by using the corresponding properties of real numbers.
  8. Prove the first five properties of Theorem 1.6.2.

Challenge Exercises

  1. Prove property 6 of Theorem 1.6.2.
  2. Prove the properties of cross products (Theorem 1.6.3).

Video for Section 1.6

Here is a video overview of the content of this section.