13.2. Vector Spaces

Here we will define some linear algebra terms and explain some foundational concepts related to vector spaces, dimension, linearly independent vectors, and rank.

13.2.1. Vector Space Definitions

Vector Space

A vector space consists of a set of vectors and a set of scalars that are closed under vector addition and scalar multiplication. Saying that they are closed means that we can add any vectors in the set together and multiply vectors by any scalars in the set, and the resulting vectors are still in the vector space.

For example, vectors in our physical 3-dimensional world are said to be in a vector space called \mathbb{R}^3. If the basis vectors of the space are the standard orthogonal coordinate frame, then each vector in \mathbb{R}^3 consists of three real numbers defining its magnitude in the \bm{x}, \bm{y}, and \bm{z} directions. The set of scalars in \mathbb{R}^3 is the set of real numbers.
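
As a small illustration (a sketch using only base MATLAB, with arbitrarily chosen values), adding two vectors in \mathbb{R}^3 or multiplying one by a real scalar always produces another vector in \mathbb{R}^3:

u = [1; -2; 4];            % vectors in R^3
v = [0;  3; 5];
c = 2.5;                   % a real scalar

w1 = u + v                 % vector addition -> still in R^3
w2 = c*u                   % scalar multiplication -> still in R^3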

Span

The set of all linear combinations of a collection of vectors is called the span of the vectors. If a vector can be expressed as a linear combination of a set of vectors, then the vector is in the span of the set.
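
As a sketch of how span membership might be checked numerically (the matrix V and vector b below are made-up values), MATLAB's backslash operator finds the best-fit coefficients; a near-zero residual indicates that b is in the span of the columns of V:

V = [1 0; 0 1; 1 0];             % two vectors in R^3, stored as columns
b = [3; 4; 3];                   % candidate vector

c = V \ b;                       % least-squares coefficients
inSpan = norm(V*c - b) < 1e-10   % true: b = 3*V(:,1) + 4*V(:,2)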

Basis

The smallest set of vectors needed to span a vector space forms a basis for that vector space. The vectors in the vector space are defined in terms of the basis vectors. For example, the basis vectors used for a Cartesian coordinate frame in the vector space \mathbb{R}^3 are:

\left\{ \vector{1; 0; 0}, \; \vector{0; 1; 0}, \;
\vector{0; 0; 1} \right\}

Many different basis vectors could be used, but we have a preference for an orthonormal, linearly independent set of vectors. In addition to the Cartesian coordinates, other sets of basis vectors are sometimes used. For example, some robotics applications may use basis vectors that span a plane corresponding to a work piece. Other applications, such as difference equations, use the set of eigenvectors of a matrix as the basis vectors.
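
As a brief sketch (the basis and vector below are arbitrary examples), the coordinates of a vector relative to a basis can be found in MATLAB by solving a linear system whose columns are the basis vectors:

B = [1 1 0; 1 -1 0; 0 0 1];   % columns of B form a basis for R^3
x = [3; 1; 2];                % a vector in Cartesian coordinates

c = B \ x                     % coordinates of x relative to the basis
check = B*c                   % reproduces x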

Dimension

The number of vectors in the basis gives the dimension of the vector space. For \mathbb{R}^3 and \mathbb{R}^2 with Cartesian basis vectors, this is also the number of elements in each vector. However, other vector spaces may have a smaller dimension. For example, the subspace where the z coordinate is equal to the x coordinate (plotted in Fig. 13.2) forms a plane with dimension 2. The orthonormal basis vectors for this subspace might be

\left\{ \bm{v}_1, \bm{v}_2 \right\} =
\left\{ \frac{1}{\sqrt{2}}\,\vector{1; 0; 1}, \vector{0; 1; 0} \right\}.

We may reference points or vectors in the subspace with 2-dimensional coordinates, (a, b). Linear combinations of the basis vectors give the corresponding point, \bm{p}, in the \mathbb{R}^3 world.

\bm{p} = a\,\bm{v}_1 + b\,\bm{v}_2

Fig. 13.2 Vector Space where Z = X, Dimension = 2.
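
A short sketch of this mapping in MATLAB, with arbitrarily chosen subspace coordinates a and b:

v1 = [1; 0; 1]/sqrt(2);    % orthonormal basis for the Z = X plane
v2 = [0; 1; 0];
a = 2;  b = -1;            % subspace coordinates (a, b)

p = a*v1 + b*v2            % point in R^3; note p(3) == p(1), so Z = X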

13.2.2. Linearly Independent Vectors

The formal definition of linearly independent vectors may at first seem a bit obscure, but it is worth stating along with some explanation and examples.

The set of vectors, \{\bm{u}_1, \bm{u}_2, \ldots, \bm{u}_n\}, is linearly independent if the only scalars c_1, c_2, \ldots, c_n satisfying the equation c_1\,\bm{u}_1 + c_2\,\bm{u}_2 + \ldots + c_n\,\bm{u}_n = 0 are c_1 = c_2 = \ldots = c_n = 0.

This means that no vector in the set is a linear combination of other vectors in the set. If one or more vectors in the set can be written as a linear combination of other vectors in the set, then the vectors are not linearly independent, and there are non-zero coefficients, c_i, that satisfy the equation c_1\,\bm{u}_1 + c_2\,\bm{u}_2 + \ldots + c_n\,\bm{u}_n = 0.

For example, the basis vectors used for a Cartesian coordinate frame in the vector space \mathbb{R}^3 are linearly independent.

\left\{ \vector{1; 0; 0}, \; \vector{0; 1; 0}, \;
\vector{0; 0; 1} \right\}

Whereas, the following set of vectors is not linearly independent because the last vector is the sum of the first vector and twice the second vector.

\left\{ \vector{1; 0; 1}, \; \vector{0; 1; 0}, \;
\vector{1; 2; 1} \right\}
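
Anticipating the rank discussion below, a minimal MATLAB sketch of this check places the vectors in the columns of a matrix: the columns are linearly independent exactly when the rank equals the number of columns.

U = [1 0 1; 0 1 2; 1 0 1];   % the dependent set above, stored as columns
rank(U)                      % 2, less than 3 columns -> not independent
rank(eye(3))                 % 3 -> the Cartesian basis is independent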

Some phrases used to describe a vector that is linearly dependent on a set of vectors are “in the span of” or “in the space of”. For example, when we evaluate the columns of a matrix, \bf{A}, with regard to another vector, \bm{b}, we might say that vector \bm{b} is in the column space of \bf{A}, which means that \bm{b} is a linear combination of the vectors that define the columns of matrix \bf{A}.
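
A hedged sketch of this test in MATLAB (the matrix and vector below are illustrative values): \bm{b} is in the column space of \bf{A} exactly when appending it as an extra column does not increase the rank.

A = [1 0; 0 1; 1 2];
b = [2; 3; 8];                           % b = 2*A(:,1) + 3*A(:,2)

inColumnSpace = rank([A b]) == rank(A)   % true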

13.2.3. Rank

The rank function can be used to test the linear independence of vectors that make up the columns of a matrix.

The rank of a matrix can be determined by two different methods. The first method is Gaussian elimination: the rank is the number of non-zero pivots in the matrix's row echelon form. Here is an example illustrating how elimination on a singular matrix results in a pivot value being equal to zero. Because of dependent relationships between the rows and columns, the row operations that change the lower diagonal elements to zeros also move the later pivots to zero. In this example, the third column is the sum of the first column and two times the second column.

\mat{\underline{3}, 2, 7; 0, 3, 6; 1, 0, 1}

Add -1/3 of row 1 to row 3, which zeros the first element of row 3. The next pivot is then in row 2.

\mat{3, 2, 7; 0, \underline{3}, 6; 0, -2/3, -4/3}

Add 2/9 of row 2 to row 3.

\mat{3, 2, 7; 0, 3, 6; 0, 0, 0}

Thus, with 2 non-zero pivots, the rank of the matrix is 2.
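
The elimination steps above can be replicated with row operations in MATLAB; a minimal sketch, with the built-in rank function as a cross-check:

A = [3 2 7; 0 3 6; 1 0 1];

A(3,:) = A(3,:) - (1/3)*A(1,:);   % add -1/3 of row 1 to row 3
A(3,:) = A(3,:) + (2/9)*A(2,:);   % add 2/9 of row 2 to row 3
A                                 % row echelon form, 2 non-zero pivots
rank(A)                           % 2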

The second method for computing rank is to use the singular value decomposition (SVD) as demonstrated in The Singular Value Decomposition. The number of singular values above a small tolerance is the rank of the matrix. The MATLAB rank function uses the SVD method.
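
A sketch of the SVD method in MATLAB; the tolerance below mirrors the default documented for rank, tol = max(size(A))*eps(norm(A)), though the exact internal rule should be checked against the documentation:

A = [3 2 7; 0 3 6; 1 0 1];

s = svd(A);                       % singular values, largest first
tol = max(size(A))*eps(max(s));   % small tolerance scaled to the matrix
r = sum(s > tol)                  % 2, matching rank(A)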