13.2. Vector Spaces
Here we will define some linear algebra terms and explain some foundational concepts related to vector spaces, dimension, linearly independent vectors, and rank.
13.2.1. Vector Space Definitions
- Vector Space

  A vector space consists of a set of vectors and a set of scalars that are closed under vector addition and scalar multiplication. Saying that they are closed means that we can add any vectors in the set together, and multiply any vector by any scalar in the set, and the resulting vectors are still in the vector space.

  For example, vectors in our physical 3-dimensional world are said to be in a vector space called $\mathbb{R}^3$. If the basis vectors of the space are the standard orthogonal coordinate frame, then each vector in $\mathbb{R}^3$ consists of three real numbers defining its magnitude in the $x$, $y$, and $z$ directions. The set of scalars in $\mathbb{R}^3$ is the set of real numbers.
- Span

  The set of all linear combinations of a collection of vectors is called the span of the vectors. If a vector can be expressed as a linear combination of a set of vectors, then the vector is in the span of the set.
- Basis

  The smallest set of vectors needed to span a vector space forms a basis for that vector space. The vectors in the vector space are defined in terms of the basis vectors. For example, the basis vectors used for a Cartesian coordinate frame in the vector space $\mathbb{R}^3$ are:

  $$\mathbf{e}_x = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \quad
    \mathbf{e}_y = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \quad
    \mathbf{e}_z = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$$

  Many different sets of basis vectors could be used, but we have a preference for an orthonormal, linearly independent set of vectors. In addition to the Cartesian coordinates, other sets of basis vectors are sometimes used. For example, some robotics applications may use basis vectors that span a plane corresponding to a work piece. Other applications, such as difference equations, use the set of eigenvectors of a matrix as the basis vectors.
- Dimension

  The number of vectors in the basis gives the dimension of the vector space. For the Cartesian basis vectors of $\mathbb{R}^2$ and $\mathbb{R}^3$, this is also the number of elements in the vectors. However, other vector spaces may have a smaller dimension. For example, the subspace of $\mathbb{R}^3$ where the $z$ coordinate is the same as the $x$ coordinate (plotted in Fig. 13.2) forms a plane with dimension 2. The orthonormal basis vectors for this subspace might be

  $$\mathbf{b}_1 = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}, \quad
    \mathbf{b}_2 = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}.$$

  We may reference points or vectors in the subspace with 2-dimensional coordinates, $(c_1, c_2)$. Linear combinations of the basis vectors, $\mathbf{p} = c_1\,\mathbf{b}_1 + c_2\,\mathbf{b}_2$, are used to find the coordinates of the point in the world, as the MATLAB sketch after this list illustrates.
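To make these ideas concrete, here is a minimal MATLAB sketch of the subspace example. The basis matrix B, the test point p, and the variable names are our own illustrative choices.

```matlab
% Orthonormal basis for the subspace of R^3 where the z coordinate
% equals the x coordinate (the basis vectors are the columns of B).
B = [1/sqrt(2) 0;
     0         1;
     1/sqrt(2) 0];

p = [3; 5; 3];     % a point with z == x, so it lies in the subspace

% Because the columns of B are orthonormal, projecting onto them gives
% the 2-dimensional subspace coordinates, and B*c maps them back.
c = B' * p                              % coordinates of p in the subspace
world = B * c                           % recovers the original 3-D point
inSubspace = norm(world - p) < 1e-12    % true: p is in the span of B's columns
```

If p did not lie in the plane, B*(B'*p) would instead return the closest point in the subspace rather than p itself.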
13.2.2. Linearly Independent Vectors
The official definition of linearly independent vectors may at first seem a bit obscure, but it is worth stating along with some explanation and examples.
The set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$ is linearly independent if for any scalars $c_1, c_2, \ldots, c_n$, the equation

$$c_1\,\mathbf{v}_1 + c_2\,\mathbf{v}_2 + \cdots + c_n\,\mathbf{v}_n = \mathbf{0}$$

has only the solution $c_1 = c_2 = \cdots = c_n = 0$.

This means that no vector in the set is a linear combination of other vectors in the set. If one or more vectors in the set can be found by a linear combination of other vectors in the set, then the vectors are not linearly independent and there are non-zero coefficients, $c_i$, that will satisfy the equation $c_1\,\mathbf{v}_1 + \cdots + c_n\,\mathbf{v}_n = \mathbf{0}$.
For example, the basis vectors used for a Cartesian coordinate frame in the vector space $\mathbb{R}^3$ are linearly independent:

$$c_1 \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}
+ c_2 \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}
+ c_3 \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}
\quad \text{only when} \quad c_1 = c_2 = c_3 = 0.$$

In contrast, the following set of vectors is not linearly independent because the last vector is the sum of the first two vectors.

$$\begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}, \quad
  \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}, \quad
  \begin{bmatrix} 1 \\ 1 \\ 2 \end{bmatrix}$$
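We can confirm this numerically. Here is a brief MATLAB sketch, collecting the three vectors above as the columns of a matrix (the rank function used here is the subject of the next section):

```matlab
V = [1 0 1;
     0 1 1;
     1 1 2];    % columns are the three dependent vectors above

r = rank(V)     % returns 2, not 3, so the columns are not linearly independent
```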
Some phrases used to describe a vector that is linearly dependent on a set of vectors are: “in the span of” or “in the space of”. For example, when we evaluate the columns of a matrix, $A$, with regard to another vector, $\mathbf{b}$, we might say that vector $\mathbf{b}$ is in the column space of $A$, which means that $\mathbf{b}$ is a linear combination of the vectors that define the columns of matrix $A$.
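One way to test whether a vector $\mathbf{b}$ is in the column space of a matrix $A$ is to solve the least-squares problem $A\mathbf{x} = \mathbf{b}$ and check the residual. This is an illustrative sketch with made-up values for $A$ and $\mathbf{b}$:

```matlab
A = [1 0;
     0 1;
     1 1];
b = [2; 3; 5];              % chosen so that b = 2*A(:,1) + 3*A(:,2)

x = A \ b;                  % least-squares solution to A*x = b
residual = norm(A*x - b)    % essentially zero, so b is in the column space of A
```

A residual near zero (within floating-point tolerance) indicates that $\mathbf{b}$ is a linear combination of the columns of $A$.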
13.2.3. Rank
The rank function can be used to test the linear independence of the vectors that make up the columns of a matrix.
The rank of a matrix can be determined by two different methods. The first method follows from the fact that the rank of a matrix is the number of non-zero pivots in its row echelon form, which is achieved by Gaussian elimination. Here is an example illustrating how elimination on a singular matrix results in a pivot value equal to zero. Because of dependent relationships between the rows and columns, the row operations that change the lower diagonal elements to zeros also move the later pivots to zero. In this example, the third column is the sum of the first column and two times the second column.

$$A = \begin{bmatrix} 3 & 6 & 15 \\ 0 & 9 & 18 \\ 1 & 0 & 1 \end{bmatrix}$$

Add $-1/3$ of row 1 to row 3. The pivot then moves to row 2.

$$\begin{bmatrix} 3 & 6 & 15 \\ 0 & 9 & 18 \\ 0 & -2 & -4 \end{bmatrix}$$

Add $2/9$ of row 2 to row 3.

$$\begin{bmatrix} 3 & 6 & 15 \\ 0 & 9 & 18 \\ 0 & 0 & 0 \end{bmatrix}$$

Thus, with 2 non-zero pivots, the rank of the matrix is 2.
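The elimination steps can be replicated in MATLAB as a check, using the same matrix and row operations as above:

```matlab
A = [3 6 15;
     0 9 18;
     1 0 1];

A(3,:) = A(3,:) - (1/3)*A(1,:);   % add -1/3 of row 1 to row 3
A(3,:) = A(3,:) + (2/9)*A(2,:);   % add 2/9 of row 2 to row 3
A                                 % row 3 is now all zeros -- only 2 pivots
r = rank(A)                       % returns 2
```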
The second method for computing rank is to use the singular value decomposition (SVD), as demonstrated in The Singular Value Decomposition. The number of singular values above a small tolerance is the rank of the matrix. The MATLAB rank function uses the SVD method.
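As a sketch of the idea, we can count the singular values of the singular matrix from the elimination example that exceed a small tolerance. The tolerance used here is our own choice, scaled by the matrix size and the floating-point spacing at the largest singular value; MATLAB's rank function applies a similar criterion.

```matlab
A = [3 6 15;
     0 9 18;
     1 0 1];

s = svd(A);                        % singular values, sorted largest first
tol = max(size(A)) * eps(s(1));    % small tolerance relative to the data
r = sum(s > tol)                   % returns 2, matching rank(A)
```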