6.13. Left-Divide Operator
We focused primarily on solutions to systems of linear equations in this
chapter. We express systems of linear equations in terms of the
$\mathbf{A}\,\mathbf{x} = \mathbf{b}$ problem. A constant reference point
in the discussion has been MATLAB's left-divide or backslash operator
(\), which efficiently finds $\mathbf{x}$ for all but singular systems of
equations. The results and applications are more important to us than
the details of MATLAB's algorithms, but it is illuminating to briefly
discuss the algorithms in light of what we have learned about systems of
equations.
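For example, with the backslash operator a small system is solved in one
step (the values below are arbitrary, chosen only for illustration):
>> A = [2 1; 1 3];
>> b = [3; 5];
>> x = A\b
x =
    0.8000
    1.4000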
The mldivide function is invoked by the left-divide operator. The
documentation does not tell us every secret behind the implementation of
the algorithms, but it does tell us the basic strategies. Please take a
look at the MATLAB documentation of mldivide. The documentation
shows flow charts depicting how full and sparse matrices are evaluated
in terms of size, shape, and special properties to determine the most
efficient algorithm. Note that the work of mldivide is usually a
progression from more complex problems toward simpler problems using
matrix factorization. In particular, lower and upper triangular matrices
are sought so that efficient triangular solvers can be used to find
solutions by forward or back substitution.
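As a quick sketch of what such a triangular solve looks like, the
linsolve function lets us assert the triangular structure so that back
substitution is used directly. The matrix values here are arbitrary.
>> U = [2 1 3; 0 4 1; 0 0 5];  % upper triangular
>> b = [13; 9; 10];
>> opts.UT = true;             % assert upper triangular structure
>> x = linsolve(U, b, opts)    % solved by back substitution
x =
    2.6250
    1.7500
    2.0000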
Right Division
As noted in Matrix Division, MATLAB also has a right-divide
operator (/). It provides the solution to
$\mathbf{x}\,\mathbf{A} = \mathbf{b}$, where $\mathbf{x}$ and
$\mathbf{b}$ are row vectors.
We have seldom mentioned it for two reasons. First,
the need for the right-divide operator is much less than for the
left-divide operator, especially in regard to systems of equations.
Second, right division is implemented using the left-divide
operator with the relationship b/A = (A'\b')'. So the algorithms
reviewed here for the left-divide operator also apply to the
right-divide operator.
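We can verify the relationship with a small example (the values are
arbitrary):
>> A = [2 1; 0 3];
>> b = [4 5];            % row vector
>> x = b/A               % right divide
x =
     2     1
>> x = (A'\b')'          % same result from left divide
x =
     2     1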
6.13.1. Left-Divide of Critically-determined Systems
As discussed in LU Decomposition, when matrix $\mathbf{A}$ is square, the primary algorithm is LU decomposition with row exchanges (partial pivoting). Matrices smaller than a threshold size (the documentation gives separate thresholds for real and complex matrices) are always solved with LU decomposition. Other algorithms listed for larger square matrices take advantage of special structural features of the matrix to use more efficient matrix factorizations.
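The LU strategy can be sketched with MATLAB's lu function. The factors
reduce the problem to two triangular solves. The matrix values below are
arbitrary.
>> A = [4 2 1; 2 3 1; 0 2 5];
>> b = [11; 11; 19];
>> [L, U, P] = lu(A);    % P*A = L*U, with partial pivoting
>> x = U\(L\(P*b))       % forward substitution, then back substitution
x =
     1
     2
     3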
A square, full (not sparse) matrix is first checked to see if it is triangular or can be multiplied by a permutation matrix to become triangular. Triangular systems are solved with forward and back substitution.
Next, it is checked to see if it is Hermitian (Matrix Transpose Properties). If $\mathbf{A}$ is Hermitian and positive definite, then the symmetry allows quick factoring into the Cholesky decomposition $\mathbf{A} = \mathbf{R}^{T}\mathbf{R}$, where $\mathbf{R}$ is upper triangular (Cholesky Decomposition for Positive Definite Systems). Cholesky decomposition is attempted if $\mathbf{A}$ appears to be positive definite. If the Cholesky decomposition fails because $\mathbf{A}$ is actually not positive definite, then LU decomposition is used.
Note
MATLAB checks if the diagonal values of a Hermitian matrix are all positive or all negative, which is a necessary but not sufficient requirement for being positive definite. If the diagonal values are all negative, then the negative of $\mathbf{A}$ is used, which may be positive definite. If these preliminary tests pass, then the Cholesky decomposition is attempted. See Cholesky Decomposition for Positive Definite Systems for more about positive definite matrices and the Cholesky decomposition.
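The try-it-and-see strategy is easy to sketch with the chol function,
whose optional second output reports failure rather than raising an
error. The matrix below is an arbitrary symmetric positive definite
example.
A = [4 1 1; 1 3 0; 1 0 2];   % symmetric positive definite
b = [6; 4; 3];
[R, p] = chol(A);            % p == 0 means the factorization succeeded
if p == 0
    x = R\(R'\b);            % A = R'*R, so two triangular solves
else
    x = A\b;                 % not positive definite, fall back to LU
end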
A square matrix that is not triangular or Hermitian is solved with either LU decomposition as presented in LU Decomposition or with a variation of the LU decomposition algorithm that takes advantage of the structure of upper Hessenberg matrices. An upper Hessenberg matrix is one diagonal off from being upper triangular. It contains zeros below the first subdiagonal (just below the main diagonal). It may seem obscure to have a special solver for Hessenberg matrices, but they occur in many problems. Upper Hessenberg matrices also include the class of tridiagonal matrices that only contain nonzero elements on the main diagonal and the diagonals immediately above and below the main diagonal. Heat conduction and fluid flow problems often result in tridiagonal matrices [BUTT10].
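For example, here is a small tridiagonal system of the kind that a 1-D
heat conduction model might produce; the values are illustrative only.
n = 6;
A = 2*eye(n) - diag(ones(n-1, 1), 1) - diag(ones(n-1, 1), -1);
b = ones(n, 1);     % tridiagonal system, as in 1-D heat conduction
x = A\b;            % the solver can exploit the banded structure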
6.13.2. Left-Divide of Over-determined Systems
As discussed in Over-determined Systems and Vector Projections, the solution to over-determined
systems is based on vector projections. It is an exact solution in the
rare event that vector $\mathbf{b}$ is in the column space of
$\mathbf{A}$, but it is usually an approximation. We have shown how
the least squares solution is found from the vector projection equation
$\mathbf{A}^{T}\mathbf{A}\,\hat{\mathbf{x}} = \mathbf{A}^{T}\mathbf{b}$ or from the
pseudo-inverse ($\hat{\mathbf{x}} = \mathbf{A}^{+}\mathbf{b}$), which MATLAB finds
from the economy SVD. However, the mldivide function uses QR
factorization to find the least squares solution. The QR algorithm is
described in QR Decomposition. It is a matrix factoring returning an
orthogonal matrix, $\mathbf{Q}$, and an upper triangular matrix,
$\mathbf{R}$, such that $\mathbf{A} = \mathbf{Q}\,\mathbf{R}$. It is an
efficient and accurate algorithm. Using the result to find the solution
to our system of equations is also quick because $\mathbf{Q}$ is
orthogonal and $\mathbf{R}$ is upper triangular. Substituting
$\mathbf{Q}\,\mathbf{R}$ for $\mathbf{A}$ gives
$\hat{\mathbf{x}} = \mathbf{R}^{-1}\mathbf{Q}^{T}\mathbf{b}$, so finding the
solution only requires a matrix transpose, a matrix–vector
multiplication, and back substitution with the triangular solver.
Note that for an over-determined matrix, the $\mathbf{R}$ matrix will
end with rows of all zeros, so we add an extra zero
argument to qr to use the economy QR, which removes those rows.
Here is a quick example:
>> A = randi(20, 5, 2) - 10;
>> b = randi(20, 5, 1) - 10;
>> x = A\b % The left-divide solution
x =
0.6383
0.1901
>> [Q, R] = qr(A, 0); % Find the solution by QR
>> x = R\(Q'*b)
x =
0.6383
0.1901
6.13.3. Left-Divide of Under-determined Systems
As discussed in Under-determined Systems, the solution to
under-determined systems is an infinite set rather than a unique
solution. So any solution satisfying the equation
$\mathbf{A}\,\mathbf{x} = \mathbf{b}$ is correct, but some solutions are
better than others. The minimum length correct solution is desired. The
documentation for mldivide states that the returned solution has at
most $r$ (the rank of $\mathbf{A}$, which is usually $m$)
nonzero values. To produce the zero value results, it replaces
$n - r$ columns of $\mathbf{A}$ with zeros before using the QR
factorization. The columns to zero out are found sequentially. The first
column zeroed out is the column that, when replaced with zeros, results in
the smallest $l_2$-norm of the solution. The evaluation is
repeated until $n - r$ columns are zeroed
out [SYSEQDOC].
By using $\mathbf{A}^{T}$ rather than $\mathbf{A}$, we can find a least squares solution that minimizes the $l_2$-norm of $\mathbf{x}$ using the economy QR factorization as done with over-determined systems. We will call the factors of $\mathbf{A}^{T}$ $\mathbf{Q}_t$ and $\mathbf{R}_t$, so that $\mathbf{A} = \mathbf{R}_t^{T}\,\mathbf{Q}_t^{T}$. The orthogonality of $\mathbf{Q}_t$ and triangular structure of $\mathbf{R}_t$ again yield an efficient solution algorithm: forward substitution solves $\mathbf{R}_t^{T}\,\mathbf{y} = \mathbf{b}$, and then $\mathbf{x} = \mathbf{Q}_t\,\mathbf{y}$.
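Here is a minimal sketch of this strategy, checked against the
pseudo-inverse, which returns the same minimum-norm solution (the matrix
values are arbitrary):
>> A = [1 2 3; 4 5 6];       % under-determined, 2-by-3
>> b = [1; 2];
>> [Qt, Rt] = qr(A', 0);     % economy QR of the transpose
>> x = Qt*(Rt'\b)            % minimum-norm solution
x =
   -0.0556
    0.1111
    0.2778
>> xp = pinv(A)*b;           % the pseudo-inverse gives the same values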
In the following example, the solution using the left-divide operator is found first to see which columns should be replaced with zeros.
>> A
A =
2 4 20 3 16
5 17 2 20 17
19 11 9 1 18
>> b = [2; 20; 13];
>> y = A\b
y =
0.5508
0
0
0.7794
0.0975
To get the same result as the left-divide operator for this example, we replace the second and third columns with zeros.
>> A(:,2) = zeros(3,1);
>> A(:,3) = zeros(3,1)
A =
2 0 0 3 16
5 0 0 20 17
19 0 0 1 18
>> [Qt, Rt] = qr(A', 0);
>> x = Qt*(Rt'\b)
x =
0.5508
0
0
0.7794
0.0975
The following lines of code follow MATLAB's published description of the
algorithm in an attempt to emulate mldivide. The code often yields
the same solution as mldivide. When the zeros are placed in different
positions, the solution found here is closer to the least squares
solution than the one returned by mldivide. Note that we must guard
against the matrix becoming rank deficient (a rank less than $m$),
which can happen when setting columns to zero.
% Emulation of mldivide for an under-determined system A*x = b,
% following MATLAB's published description of the algorithm.
[m, n] = size(A);          % A is m-by-n with m < n
nm = zeros(n, 1);          % solution norms for each candidate column
C = A;
for k = 1:n-m              % zero out n - m columns in total
    for p = 1:n
        B = C;
        B(:,p) = zeros(m, 1);          % try zeroing column p
        if rank(B) < m
            nm(p) = inf;               % rank deficient, disqualify
        else
            [Qt, Rt] = qr(B', 0);      % economy QR of the transpose
            nm(p) = norm(Qt*(Rt'\b));  % norm of the resulting solution
        end
    end
    [~, idx] = sort(nm);
    C(:,idx(k)) = zeros(m, 1);  % zero the column giving the smallest norm;
                                % idx(1:k-1) repeat already-zeroed columns
end
[Qt, Rt] = qr(C', 0);
x = Qt*(Rt'\b);
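To try the sketch with the example from above, restore the original
matrix before running the lines; the residual check at the end is just a
sanity test.
A = [2 4 20 3 16; 5 17 2 20 17; 19 11 9 1 18];
b = [2; 20; 13];
% ... run the emulation code above ...
norm(A*x - b)    % near zero for any valid solution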