6.13. Left-Divide Operator

In this chapter, we have focused primarily on solutions to systems of linear equations, which we express in terms of the \mathbf{A}\,\bm{x} = \bm{b} problem. A constant reference point in the discussion has been MATLAB’s left-divide, or backslash, operator (\), which efficiently finds \bm{x} for all but singular systems. The results and applications are more important to us than the details of MATLAB’s algorithms, but it is illuminating to briefly review those algorithms in light of what we have learned about systems of equations.

The mldivide function is invoked by the left-divide operator. The documentation does not reveal every secret behind the implementation of the algorithms, but it does describe the basic strategies. Please take a look at the MATLAB documentation for mldivide. The documentation shows flow charts depicting how full and sparse matrices are evaluated in terms of size, shape, and special properties to determine the most efficient algorithm. Note that the work of mldivide usually progresses from more complex problems toward simpler ones by way of matrix factorization. In particular, lower and upper triangular matrices are sought so that efficient triangular solvers can find solutions by forward or back substitution.

Right Division

As noted in Matrix Division, MATLAB also has a right-divide operator (/). It provides the solution to \bm{x}\mathbf{A} = \bm{b}, where \bm{x} and \bm{b} are row vectors. We have seldom mentioned it for two reasons. First, the right-divide operator is needed much less often than the left-divide operator, especially in regard to systems of equations. Second, right division is implemented using the left-divide operator with the relationship b/A = (A'\b')'. So the algorithms reviewed here for the left-divide operator also apply to the right-divide operator.
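We can verify the relationship with a small example (the values here are our own illustration):

>> A = [4 1; 2 3];
>> b = [5 7];            % a row vector
>> x = b/A               % right division solves x*A = b
x =
    0.1000    2.3000

>> (A'\b')'              % the same result via left division
ans =
    0.1000    2.3000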

6.13.1. Left-Divide of Critically-determined Systems

As discussed in LU Decomposition, when matrix \bf{A} is square, the primary algorithm is LU decomposition with row exchanges (partial pivoting). Matrices smaller than 16{\times}16 (real) or 8{\times}8 (complex) are always solved with LU decomposition. The other algorithms listed for larger square matrices take advantage of special structural properties of the matrix to use more efficient matrix factorizations.
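Here is a quick sketch of this path using the lu function; the matrix and vector values are our own illustration:

>> A = [2 1 1; 4 3 3; 8 7 9];
>> b = [1; 2; 5];
>> [L, U, P] = lu(A);   % P*A = L*U, with partial pivoting
>> x = U\(L\(P*b))      % forward, then back substitution
x =
    0.5000
   -0.5000
    0.5000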

A square, full (not sparse) matrix is first checked to see if it is triangular or can be multiplied by a permutation matrix to become triangular. Triangular systems are solved with forward and back substitution.
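As an illustration (our sketch, not MATLAB's internal code), back substitution for an upper triangular system \mathbf{U}\,\bm{x} = \bm{b} takes only a few lines; forward substitution for lower triangular systems is analogous.

% Back substitution for an upper triangular system U*x = b.
n = length(b);
x = zeros(n, 1);
for i = n:-1:1
    % Subtract the contribution of the already-found unknowns,
    % then divide by the diagonal element.
    x(i) = (b(i) - U(i,i+1:n)*x(i+1:n)) / U(i,i);
end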

Next, the matrix is checked to see if it is Hermitian (Matrix Transpose Properties). If \mathbf{A} is Hermitian and positive definite, then the symmetry allows quick factoring into the Cholesky decomposition \mathbf{A} = \mathbf{R}^T\,\mathbf{R}, where \bf{R} is upper triangular (Cholesky Decomposition for Positive Definite Systems). Cholesky decomposition is attempted if \bf{A} appears to be positive definite. If the Cholesky decomposition fails because \bf{A} is actually not positive definite, then LU decomposition is used instead.

Note

MATLAB checks whether the diagonal values of a Hermitian matrix are all positive or all negative, which is a necessary but not sufficient condition for being positive definite. If the diagonal values are all negative, then the negative of \bf{A} is used, which may be positive definite. If these preliminary tests pass, then the Cholesky decomposition is attempted. See Cholesky Decomposition for Positive Definite Systems for more about positive definite matrices and the Cholesky decomposition.
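Here is a quick sketch of the Cholesky path; the positive definite matrix below is our own illustrative example:

>> A = [4 1; 1 3];       % symmetric positive definite
>> b = [1; 2];
>> R = chol(A);          % A = R'*R, with R upper triangular
>> x = R\(R'\b)          % two triangular solves
x =
    0.0909
    0.6364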

A square matrix that is not triangular or Hermitian is solved either with LU decomposition as presented in LU Decomposition or with a variation of the LU decomposition algorithm that takes advantage of the structure of upper Hessenberg matrices. An upper Hessenberg matrix is one diagonal away from being upper triangular: it contains zeros below the first subdiagonal (just below the main diagonal). It may seem obscure to have a special solver for Hessenberg matrices, but they occur in many problems. Upper Hessenberg matrices also include the class of tridiagonal matrices, which contain nonzero elements only on the main diagonal and the diagonals immediately above and below it. Heat conduction and fluid flow problems often result in tridiagonal matrices [BUTT10].

\begin{array}{ccc}
  \mat{a b c d; e f g h; 0 i j k; 0 0 l m} &\hspace{1cm}
  &\mat{a b 0 0; c d e 0; 0 f g h; 0 0 i j} \\[6pt]
  \text{Upper Hessenberg} & &\text{Tridiagonal}
\end{array}
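For example, here is the classic tridiagonal matrix from a discretized one-dimensional heat conduction problem (an illustrative system; mldivide does not report which internal solver it selects):

>> n = 5;
>> T = diag(2*ones(n,1)) - diag(ones(n-1,1), 1) - diag(ones(n-1,1), -1);
>> b = ones(n, 1);
>> x = T\b
x =
    2.5000
    4.0000
    4.5000
    4.0000
    2.5000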

6.13.2. Left-Divide of Over-determined Systems

As discussed in Over-determined Systems and Vector Projections, the solution to over-determined systems is based on vector projections. It is an exact solution in the rare event that vector \bm{b} is in the column space of \bf{A}, but it is usually an approximation. We have shown how the least squares solution is found from the vector projection equation \bm{x} =
\left(\mathbf{A}^T\mathbf{A}\right)^{-1}\mathbf{A}^T\bm{b} or from the pseudo-inverse (\bm{x} = \mathbf{A}^+\bm{b}), which MATLAB finds from the economy SVD. However, the mldivide function uses QR factorization to find the least squares solution. The QR algorithm is described in QR Decomposition. It is a matrix factorization that returns an orthogonal matrix, \bf{Q}, and an upper triangular matrix, \bf{R}, such that \mathbf{A} = \bf{Q\,R}. It is an efficient and accurate algorithm. Using the result to find the solution to our system of equations is also quick because \bf{Q} is orthogonal and \bf{R} is upper triangular. So finding the solution only requires a matrix transpose, a matrix–vector multiplication, and back substitution with the triangular solver.

\begin{aligned}
        &\mathbf{A} = \mathbf{Q\,R} \\
        &\mathbf{A}\,\bm{x} = \mathbf{Q\,R}\,\bm{x} = \bm{b}  \\
        &\bm{x} = \mathbf{R}^{-1} \mathbf{Q}^T \bm{b}
    \end{aligned}

Note that for an over-determined matrix, the \bf{R} matrix will end with m - n rows of all zeros, so we add an extra zero argument to qr to use the economy QR, which removes those rows. Here is a quick example:

>> A = randi(20, 5, 2) - 10;
>> b = randi(20, 5, 1) - 10;
>> x = A\b              % The left-divide solution
x =
    0.6383
    0.1901

>> [Q, R] = qr(A, 0);     % Find the solution by QR
>> x = R\(Q'*b)
x =
    0.6383
    0.1901

6.13.3. Left-Divide of Under-determined Systems

As discussed in Under-determined Systems, the solution to under-determined systems is an infinite set rather than a unique solution. So any solution satisfying the equation \mathbf{A}\,\bm{x} = \bm{b} is correct, but some solutions are better than others. The correct solution of minimum length is desired. The documentation for mldivide states that the returned solution has at most r (the rank of \bf{A}, which is usually m) nonzero values. To produce the zero values, it replaces n - r columns of \bf{A} with zeros before using the QR factorization. The columns to zero out are found sequentially. The first column zeroed out is the one that, when replaced with zeros, results in the solution with the smallest l_2–norm. The evaluation is repeated until n - r columns are zeroed out [SYSEQDOC].

By using \mathbf{A}^T rather than \bf{A}, we can find a least squares solution that minimizes the l_2–norm of \bm{x} using the economy QR factorization, as we did with over-determined systems. We will call the factors of \mathbf{A}^T by the names \mathbf{Q}_t and \mathbf{R}_t. The orthogonality of \mathbf{Q}_t and the triangular structure of \mathbf{R}_t again yield an efficient solution algorithm.

\begin{aligned}
  &\mathbf{A}^T = \mathbf{Q}_t\, \mathbf{R}_t \\
  &\mathbf{A} = \left(\mathbf{A}^T\right)^T
      = \left(\mathbf{Q}_t\, \mathbf{R}_t\right)^T
      = \mathbf{R}_t^T\,\mathbf{Q}_t^T  \\
  &\mathbf{A}\,\bm{x} = \mathbf{R}_t^T\,\mathbf{Q}_t^T\,\bm{x} = \bm{b}  \\
  &\bm{x} = \mathbf{Q}_t\, \left(\mathbf{R}_t^T\right)^{-1}\,\bm{b}
\end{aligned}

In the following example, the solution using the left-divide operator is found first to see which columns should be replaced with zeros.

>> A
A =
    2     4    20     3    16
    5    17     2    20    17
   19    11     9     1    18
>> b = [2; 20; 13];

>> y = A\b
y =
    0.5508
    0
    0
    0.7794
    0.0975

To get the same result as the left-divide operator for this example, we replace the second and third columns with zeros.

>> A(:,2) = zeros(3,1);
>> A(:,3) = zeros(3,1)
A =
    2     0     0     3    16
    5     0     0    20    17
   19     0     0     1    18

>> [Qt, Rt] = qr(A', 0);
>> x = Qt*(Rt'\b)
x =
    0.5508
    0
    0
    0.7794
    0.0975

The following lines of code follow MATLAB’s published description of the algorithm in an attempt to emulate mldivide. The code often yields the same solution as mldivide. When it zeros different columns than mldivide does, its solution is closer to the minimum length solution than the one returned by mldivide. Note that we must guard against the matrix becoming rank deficient (r < m), which can happen when setting columns to zero.

% Emulate mldivide's under-determined algorithm: sequentially zero out
% n - m columns of A, each time choosing the column whose removal gives
% the solution with the smallest l2-norm.
[m, n] = size(A);
nm = zeros(n, 1);       % solution norm with each candidate column zeroed
C = A;
for k = 1:n-m
    for p = 1:n
        B = C;
        B(:,p) = zeros(m, 1);       % try zeroing column p
        if rank(B) < m
            nm(p) = inf;            % guard against rank deficiency
        else
            [Qt, Rt] = qr(B', 0);
            nm(p) = norm(Qt*(Rt'\b));
        end
    end
    % Columns zeroed in earlier passes reproduce the current minimum
    % norm and sort to the front, so idx(k) is the best new column.
    [~, idx] = sort(nm);
    C(:,idx(k)) = zeros(m, 1);
end
[Qt, Rt] = qr(C', 0);
x = Qt*(Rt'\b);