6.1. Working with Vectors¶
Vectors and Matrices in MATLAB defines vectors and discusses how to create and use them in MATLAB. Here we define some linear algebra operations on vectors.
Vectors have scalar coefficients defining their displacement in each dimension of the coordinate axis system. We can think of a vector as having a specific length and direction. What we don’t always know about a vector is where it begins. Lacking other information about a vector, we can consider that the vector begins at the origin. Points are sometimes confused with vectors because they are also defined by a set of coefficients for each dimension. The values of points are always relative to the origin. Vectors may also be defined as spanning between two points, and one of the points may be the origin.
Figure Fig. 6.1 shows the relationship of points and vectors relative to the coordinate axes.
Note that vectors are usually stored as column vectors in MATLAB. One might accurately observe that the data of a vector is a set of numbers, which is neither a row vector nor a column vector [KLEIN13]. However, vectors in MATLAB are stored as either row vectors or column vectors. The need to multiply a vector by a matrix using inner products (defined in Dot Product or Inner Product) motivates the use of column vectors. Because column vectors take more space when displayed in written material, they are often displayed in one of two alternate ways: either as the transpose of a row vector, such as $\mathbf{x} = \begin{bmatrix} x_1 & x_2 & \cdots & x_n \end{bmatrix}^{\mathsf{T}}$, or with parentheses, such as $\mathbf{x} = (x_1, x_2, \ldots, x_n)$.
Note
The word vector comes from the Latin word meaning “carrier”.
6.1.1. Linear Vectors¶
Vectors are linear, which as shown in figure Fig. 6.2 means that they can be added together ($\mathbf{u} + \mathbf{v}$) and scaled by multiplication by a scalar constant ($c\,\mathbf{u}$). Vectors may also be defined as linear combinations of other vectors ($\mathbf{w} = a\,\mathbf{u} + b\,\mathbf{v}$).
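For illustration, the short MATLAB session below (using arbitrary example vectors, not those in the figure) adds two vectors, scales a vector by a constant, and forms a linear combination.
>> u = [1; 2];
>> v = [3; -1];
>> u + v        % vector addition
ans =
4
1
>> 3*u          % scaling by a scalar constant
ans =
3
6
>> w = 2*u - v  % a linear combination of u and v
w =
-1
5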
6.1.2. Independent Vectors¶
A vector is independent of other vectors within a set if no linear combination of other vectors defines the vector. Consider the following set of four vectors.
The first three vectors of the set are not independent because each of them can be written as a linear combination of the others. However, the fourth vector is independent of the other three vectors.
Sometimes we can observe dependent relationships, but it is often difficult to see the relationships. MATLAB has a few functions that will help one test for independence and even see what the dependent relationships are. Tests for independence include the rank and det functions discussed in Invertible Test. The rref function (see Reduced Row Echelon Form) and cr function show the relationship between dependent vectors. The cr function is not part of MATLAB, but is described in The Column and Row Factorization.
Section Linearly Independent Vectors gives a more mathematically formal definition of linearly independent vectors.
6.1.3. Transpose¶
The transpose of either a vector or a matrix exchanges its rows and columns.
In MATLAB, the transpose operator is the apostrophe ('):
>> A_trans = A';
Note
The .' operator performs a simple transpose, and ' performs a complex conjugate transpose, also referred to as a Hermitian transpose. For matrices with only real numbers, the result of the two operators is the same. For matrices with complex numbers, the complex conjugate is usually desired. In mathematical literature, the simple transpose is almost always represented as $\mathbf{A}^{\mathsf{T}}$, and the Hermitian transpose is usually denoted as $\mathbf{A}^{\mathsf{H}}$, $\mathbf{A}^{*}$, or $\mathbf{A}^{\dagger}$.
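The difference between the two operators can be seen with a small example using an arbitrary complex vector.
>> z = [1 + 2i; 3 - 1i];
>> z.'    % simple transpose
ans =
1.0000 + 2.0000i   3.0000 - 1.0000i
>> z'     % complex conjugate (Hermitian) transpose
ans =
1.0000 - 2.0000i   3.0000 + 1.0000i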
6.1.4. Dot Product or Inner Product¶
The sum of the products of corresponding elements of two vectors is called a dot product, $\mathbf{u}\cdot\mathbf{v} = \sum_i u_i v_i$. The operation yields a scalar.
- Inner Product
An inner product is the sum of products between a row vector and a column vector. The multiply operator (*) in MATLAB performs an inner product. Thus to calculate a dot product, we only need to multiply the transpose of the first vector by the second vector, $\mathbf{u}\cdot\mathbf{v} = \mathbf{u}^{\mathsf{T}}\mathbf{v}$. The resulting scalar is the dot product of the vectors.
The inner product is also used to calculate each value of the product between matrices. Each element of the product is an inner product between a row of the left matrix and a column of the right matrix.
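For example, with two small arbitrary matrices, element (1, 2) of the product is the inner product of row 1 of the left matrix with column 2 of the right matrix.
>> A = [1 2; 3 4];
>> B = [5 6; 7 8];
>> C = A*B
C =
19 22
43 50
>> A(1,:)*B(:,2)   % inner product that produces C(1,2)
ans =
22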
MATLAB has a dot function that takes two vector arguments and returns the scalar dot product. However, it is often just as easy to implement a dot product using inner product multiplication.
>> a = [3; 5];
>> b = [2; 4];
>> c = dot(a,b)
c =
26
>> c = a'*b
c =
26
Note
If the vectors are both row vectors (nonstandard), then the dot product becomes $\mathbf{a}\,\mathbf{b}^{\mathsf{T}}$ (a*b' in MATLAB).
6.1.5. Dot Product Properties¶
6.1.5.1. Commutative¶
The dot product $\mathbf{u}\cdot\mathbf{v}$ equals $\mathbf{v}\cdot\mathbf{u}$. The order does not matter.
6.1.5.2. Length of Vectors¶
The length of the vector $\mathbf{v}$, written $\lVert\mathbf{v}\rVert$, is the square root of the dot product of $\mathbf{v}$ with itself: $\lVert\mathbf{v}\rVert = \sqrt{\mathbf{v}\cdot\mathbf{v}} = \sqrt{\mathbf{v}^{\mathsf{T}}\mathbf{v}}$.
MATLAB has a function called norm that takes a vector as an argument and returns the length. The Euclidean length of a vector is called the $l_2$-norm, which is the default for the norm function. See Norms for information on other vector length measurements.
A normalized or unit vector is a vector of length one. A vector may be normalized by dividing it by its length.
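The short session below, using an arbitrary vector, confirms that norm matches the square root of the dot product of the vector with itself and shows how to normalize the vector.
>> v = [3; 4];
>> norm(v)
ans =
5
>> sqrt(v'*v)           % same as norm(v)
ans =
5
>> v_hat = v/norm(v)    % unit vector in the direction of v
v_hat =
0.6000
0.8000
>> norm(v_hat)
ans =
1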
6.1.5.3. Angle Between Vectors¶
Consider two unit vectors (length 1) in $\mathbb{R}^2$ as shown in figure Fig. 6.3, $\mathbf{u} = (\cos\alpha,\, \sin\alpha)$ and $\mathbf{v} = (\cos\beta,\, \sin\beta)$. Then consider $\mathbf{v}$ to be $\mathbf{u}$ rotated by an angle $\theta$, such that $\beta = \alpha + \theta$. Refer to a table of trigonometry identities to verify the final conclusion below.

$\mathbf{u}\cdot\mathbf{v} = \cos\alpha\,\cos(\alpha + \theta) + \sin\alpha\,\sin(\alpha + \theta) = \cos\theta \qquad (6.4)$
The result of equation (6.4) can also be found from the law of cosines, which tells us that

$\lVert\mathbf{u} - \mathbf{v}\rVert^2 = \lVert\mathbf{u}\rVert^2 + \lVert\mathbf{v}\rVert^2 - 2\,\lVert\mathbf{u}\rVert\,\lVert\mathbf{v}\rVert\cos\theta$

Since both $\mathbf{u}$ and $\mathbf{v}$ are unit vectors, several terms become 1. Expanding the left side as $\mathbf{u}\cdot\mathbf{u} - 2\,\mathbf{u}\cdot\mathbf{v} + \mathbf{v}\cdot\mathbf{v}$ then again gives $\mathbf{u}\cdot\mathbf{v} = \cos\theta$.
When the vectors are not unit vectors, the vector lengths factor out as constants. A unit vector is obtained by dividing the vector by its length.

$\cos\theta = \dfrac{\mathbf{u}\cdot\mathbf{v}}{\lVert\mathbf{u}\rVert\,\lVert\mathbf{v}\rVert} \qquad (6.5)$

All angles have $\lvert\cos\theta\rvert \leq 1$. So all vectors have:

$\lvert\mathbf{u}\cdot\mathbf{v}\rvert \leq \lVert\mathbf{u}\rVert\,\lVert\mathbf{v}\rVert$
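Equation (6.5) translates directly to MATLAB. Here is a minimal sketch using two arbitrary vectors; acosd returns the angle in degrees.
>> u = [1; 0];
>> v = [1; 1];
>> cos_theta = (u'*v)/(norm(u)*norm(v))
cos_theta =
0.7071
>> theta = acosd(cos_theta)
theta =
45.0000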
6.1.5.4. Orthogonal Vector Test¶
When two vectors are perpendicular ($\theta = 90^{\circ}$, or $\pi/2$ radians), then from equation (6.5) their dot product is zero. This property extends to $\mathbb{R}^3$ and beyond, where we say that the vectors in $\mathbb{R}^n$ are orthogonal when their dot product is zero.
This is an important result. The geometric properties of orthogonal vectors provide useful strategies for finding optimal solutions for some problems. One example of this is demonstrated in Over-determined Systems and Vector Projections.
Perpendicular Vectors
By the Pythagorean theorem, if $\mathbf{u}$ and $\mathbf{v}$ are perpendicular, then as shown in figure Fig. 6.4, $\lVert\mathbf{v}\rVert^2 + \lVert\mathbf{u}\rVert^2 = \lVert\mathbf{v} - \mathbf{u}\rVert^2$.
>> u = [0; 1];
>> v = [1; 0];
>> w = v - u
w =
1
-1
>> v'*v + u'*u
ans =
2
>> w'*w
ans =
2
6.1.6. Applications of Dot Products¶
6.1.6.1. Perpendicular Rhombus Vectors¶
As illustrated in figure Fig. 6.5, a rhombus is any parallelogram whose sides are the same length. Let us use the properties of dot products to show that the diagonals of a rhombus are perpendicular.
Matrix Transpose Properties lists transpose properties including the transpose with respect to addition, $(\mathbf{u} + \mathbf{v})^{\mathsf{T}} = \mathbf{u}^{\mathsf{T}} + \mathbf{v}^{\mathsf{T}}$. We also need to use the fact that the sides of a rhombus are the same length. We begin by setting the dot product of the diagonals, $\mathbf{u} + \mathbf{v}$ and $\mathbf{u} - \mathbf{v}$ for adjacent sides $\mathbf{u}$ and $\mathbf{v}$, equal to zero.
$(\mathbf{u} + \mathbf{v})^{\mathsf{T}}(\mathbf{u} - \mathbf{v}) = \mathbf{u}^{\mathsf{T}}\mathbf{u} - \mathbf{u}^{\mathsf{T}}\mathbf{v} + \mathbf{v}^{\mathsf{T}}\mathbf{u} - \mathbf{v}^{\mathsf{T}}\mathbf{v} = 0 \qquad (6.6)$
Each term of equation (6.6) is a scalar and dot products are commutative, so the middle two terms cancel each other and we are left with the requirement that the lengths of the sides of the rhombus be the same, which is in the definition of a rhombus.
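A quick numerical check with an arbitrary rhombus (two side vectors of equal length) illustrates the result.
>> u = [4; 3];     % side vectors, both of length 5
>> v = [3; 4];
>> d1 = u + v;     % the two diagonals
>> d2 = u - v;
>> d1'*d2          % dot product of the diagonals is zero
ans =
0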
6.1.6.2. Square Corners¶
When building something in the shape of a rectangle, we want the four corners to be square (perpendicular). A common test to verify that the corners are square is to measure the length of the diagonals between opposite corners. If the corners are square, then the lengths of the diagonals will be equal. Here we will prove that the test is valid.
With reference to Fig. 6.6, we want to show that the two diagonals, $\mathbf{u} + \mathbf{v}$ and $\mathbf{v} - \mathbf{u}$ for adjacent sides $\mathbf{u}$ and $\mathbf{v}$, have the same length only when the corners are square. The dot product of a vector with itself gives the length squared, which if equivalent proves that the lengths are equivalent. Recall that $\mathbf{u}^{\mathsf{T}}\mathbf{v}$ and $\mathbf{v}^{\mathsf{T}}\mathbf{u}$ are equal scalar values.
$(\mathbf{u} + \mathbf{v})^{\mathsf{T}}(\mathbf{u} + \mathbf{v}) = (\mathbf{v} - \mathbf{u})^{\mathsf{T}}(\mathbf{v} - \mathbf{u})$

$\mathbf{u}^{\mathsf{T}}\mathbf{u} + \mathbf{u}^{\mathsf{T}}\mathbf{v} + \mathbf{v}^{\mathsf{T}}\mathbf{u} + \mathbf{v}^{\mathsf{T}}\mathbf{v} = \mathbf{v}^{\mathsf{T}}\mathbf{v} - \mathbf{v}^{\mathsf{T}}\mathbf{u} - \mathbf{u}^{\mathsf{T}}\mathbf{v} + \mathbf{u}^{\mathsf{T}}\mathbf{u} \qquad (6.7)$
Several terms of equation (6.7) cancel out. Equation (6.7) can only be true when $\mathbf{u}^{\mathsf{T}}\mathbf{v} = 0$, which is only satisfied when $\mathbf{u}$ and $\mathbf{v}$ are perpendicular ($\theta = 90^{\circ}$).
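A small numeric illustration with made-up side vectors shows that the diagonals have equal length when the sides are perpendicular and different lengths when they are not.
>> u = [4; 0];  v = [0; 3];   % perpendicular sides
>> norm(u + v), norm(v - u)
ans =
5
ans =
5
>> u = [4; 1];  v = [0; 3];   % sides that are not perpendicular
>> norm(u + v), norm(v - u)
ans =
5.6569
ans =
4.4721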
6.1.6.3. Find a Perpendicular Vector¶
Let us consider a mobile robotics problem. The robot needs to execute a wall-following algorithm. We will use two of the robot’s distance sensors: one perpendicular to the robot’s direction of travel and one at 45°. As shown in figure Fig. 6.7, we establish points $\mathbf{p}_1$ and $\mathbf{p}_2$ from the sensors in the robot’s coordinate frame along the wall and find vector $\mathbf{b} = \mathbf{p}_2 - \mathbf{p}_1$. We need to find a short-term goal location for the robot to drive toward that is perpendicular to the wall at a fixed distance from the point $\mathbf{p}_2$. Using vectors $\mathbf{a} = \mathbf{p}_1$ and $\mathbf{b}$, we can find an equation for vector $\mathbf{c}$ such that $\mathbf{c}$ is perpendicular to $\mathbf{b}$ and begins at the origin of the robot’s coordinate frame. With the coordinates of $\mathbf{c}$, the goal location is a simple calculation.
We want to identify the vector from the terminal point of $k\,\hat{\mathbf{b}}$ to the terminal point of $\mathbf{a}$ as $\mathbf{c}$, where $k$ is a scalar and $\hat{\mathbf{b}}$ is a unit vector in the direction of $\mathbf{b}$.

$\mathbf{c} = \mathbf{a} - k\,\hat{\mathbf{b}} \qquad (6.8)$

We find the answer by starting with the orthogonality requirement between vectors $\hat{\mathbf{b}}$ and $\mathbf{c}$.

$\hat{\mathbf{b}}^{\mathsf{T}}\mathbf{c} = \hat{\mathbf{b}}^{\mathsf{T}}\left(\mathbf{a} - k\,\hat{\mathbf{b}}\right) = \hat{\mathbf{b}}^{\mathsf{T}}\mathbf{a} - k\,\hat{\mathbf{b}}^{\mathsf{T}}\hat{\mathbf{b}} = 0$

Since $\hat{\mathbf{b}}$ is a unit vector, $\hat{\mathbf{b}}^{\mathsf{T}}\hat{\mathbf{b}} = 1$ and drops out of the equation, leaving $k = \hat{\mathbf{b}}^{\mathsf{T}}\mathbf{a}$. We return to equation (6.8) to find the equation for $\mathbf{c}$.

$\mathbf{c} = \mathbf{a} - \hat{\mathbf{b}}\left(\hat{\mathbf{b}}^{\mathsf{T}}\mathbf{a}\right) \qquad (6.9)$
Figure Fig. 6.8 shows an example starting with the sensor measurements for points $\mathbf{p}_1$ and $\mathbf{p}_2$. Finding the vector perpendicular to the wall with equation (6.9) makes finding the short-term goal location quite simple. We are using sensors on the robot at 90° and 45°, and the coordinate frame is that of the robot’s, so $\mathbf{p}_1$ is on the $y$ axis and $\mathbf{p}_2$ is on a line at 45° from the robot. The robot begins 50 cm from the wall.
>> p1 = [0; 50];
>> p2 = [40; 40];
>> b = p2 - p1;
>> bhat = b/norm(b);
>> a = p1;
>> c = a - bhat*(bhat'*a);
>> k = 50;
>> goal = p2 - k*c/norm(c)
goal =
27.873
-8.507
6.1.7. Outer Product¶
Whereas the inner product (dot product) of two vectors is a scalar ($\mathbf{u}^{\mathsf{T}}\mathbf{v}$), the outer product of two vectors, $\mathbf{u}\,\mathbf{v}^{\mathsf{T}}$, is a matrix. Each element of the outer product is a product of an element from the left column vector and an element from the right row vector. Outer products are not used nearly as often as inner products, but in Singular Value Decomposition (SVD) we will see an important application of outer products.
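For example, with two arbitrary vectors, the outer product is computed in MATLAB by multiplying a column vector by the transpose of another (row) vector.
>> a = [1; 2; 3];
>> b = [4; 5];
>> a*b'       % outer product, a 3-by-2 matrix
ans =
4 5
8 10
12 15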
6.1.8. Dimension and Space¶
Basis vectors provide the coordinate axes used to define the coordinates of points and vectors. The Cartesian basis vectors are defined by the two columns of the identity matrix, $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$ and $\begin{bmatrix} 0 \\ 1 \end{bmatrix}$. A plot of a portion of the Cartesian vector space is shown in figure Fig. 6.9.
>> c_basis = eye(2)
c_basis =
1 0
0 1
A vector space consists of a set of vectors and a set of scalars that are closed under vector addition and scalar multiplication. Saying that they are closed just means that we can add any vectors in the vector space together and multiply any vectors in the space by a scalar and the resulting vectors are still in the vector space. The set of all possible vectors in a vector space is called the span of the vector space.
For example, vectors in our physical 3-dimensional world are said to be in a vector space called $\mathbb{R}^3$. Vectors in $\mathbb{R}^3$ consist of three real numbers defining their magnitude in the $x$, $y$, and $z$ directions. Similarly, vectors on a 2-D plane, like a piece of paper, are said to be in the vector space called $\mathbb{R}^2$.
Let us clarify a subtle matter of definitions. We normally think of the term dimension as meaning how many coordinate values are used to define points and vectors, which is accurate for coordinates relative to their basis vectors. However, when points in a custom coordinate system are mapped to Cartesian coordinates via multiplication by the basis vectors, then the number of elements in the mapped coordinates matches the number of elements in the basis vectors rather than the number of basis vectors. In Vector Spaces, we give a more precise definition of dimension as the number of basis vectors in the vector space.
In figure Fig. 6.10, a custom vector space has basis vectors given by the columns of the matrix $\mathbf{W}$. Even though the basis vectors are in Cartesian coordinates, the vector space is 2-dimensional since there are two basis vectors and the span of the vector space is a plane.
Notice that the two basis vectors are orthogonal to each other. Since the basis vectors are in the columns of $\mathbf{W}$, the product $\mathbf{W}^{\mathsf{T}}\mathbf{W} = \mathbf{I}$ shows us that the columns are unitary (length of one and orthogonal to each other).
>> W
W =
0.8944 -0.0976
0 0.9759
0.4472 0.1952
>> point = [1; 2]
point =
1
2
>> point_cartesian = W*point
point_cartesian =
0.6992
1.9518
0.8376
>> W'*W
ans =
1.0000 0.0000
0.0000 1.0000
Other vector spaces may also be used for applications not relating to geometry and may have higher dimension than 3. Generally, we call this $\mathbb{R}^n$. For some applications, the coefficients of the vectors and scalars may also be complex numbers, which is a vector space denoted as $\mathbb{C}^n$.
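As a brief sketch with an arbitrary vector in $\mathbb{C}^2$, the length of a complex vector uses the Hermitian (conjugate) transpose described in the earlier note, which is what the norm function computes.
>> z = [1 + 1i; 2 - 1i];    % a vector in C^2
>> norm(z)                  % length is sqrt(|1+1i|^2 + |2-1i|^2) = sqrt(7)
ans =
2.6458
>> sqrt(real(z'*z))         % same length from the conjugate transpose
ans =
2.6458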