.. _otherSVDapps:

Other Applications of the SVD
=============================

In our first introduction to the SVD (:ref:`SVDintro`), we described how
it is used to find the inverse of a matrix, the rank of a matrix, and the
null solution of a singular matrix equation. We also discussed calculating
the pseudo-inverse of rectangular matrices in :ref:`SVDpinv` and
:ref:`prefunder`. Here we list five additional applications of the SVD to
linear algebra: vector projections with the economy SVD, finding
orthogonal basis vectors from the SVD's :math:`\bf{U}` factor, identifying
the four fundamental subspaces from the SVD, finding the condition number
of a matrix, and the polar decomposition.

.. _econSVD:

Projection and the Economy SVD
------------------------------

.. index:: economy SVD

.. index:: orth

.. index:: projection

The SVD has application to vector projections as described in
:ref:`projection`. We begin with an observation related to the extra rows
of zeros in the :math:`\bf{\Sigma}` matrix for over-determined systems. As
discussed in :ref:`SVDrectangular`, some columns of the :math:`\bf{U}`
matrix do not contribute to the final SVD product because they are
multiplied by zeros in :math:`\bf{\Sigma}`.

.. describe:: Over-determined Full SVD

   .. math::

      \mathbf{A} = \mat{ \mathbf{\tilde{U}} \mathbf{U}_{unused}}
      \mat{ \mathbf{\tilde{\Sigma}}; \mathbf{0}} \mathbf{V}^T

.. index:: economy SVD

.. index:: SVD!economy

.. describe:: Economy SVD

   The economy SVD removes the unused columns of :math:`\bf{U}` and the
   rows of zeros in :math:`\bf{\Sigma}`.

   .. math::

      \mathbf{A} = \mathbf{\tilde{U}} \mathbf{\tilde{\Sigma}} \mathbf{V}^T

The economy SVD is a valid factoring. The only noticeable application
difference is that :math:`\bf{\tilde{U}}` is not unitary:
:math:`\mathbf{\tilde{U}}^T \mathbf{\tilde{U}} = \bf{I}`, but
:math:`\mathbf{\tilde{U}} \mathbf{\tilde{U}}^T \ne \bf{I}`.

The economy SVD is used to solve over-determined systems of equations
(:math:`\mathbf{A}\,\bm{x} = \bm{b}`) and projection approximations.

.. math::

   \bm{\hat{x}} = \mathbf{A}^{+} \bm{b}
   = \mathbf{V}\,\mathbf{\tilde{\Sigma}}^{+} \mathbf{\tilde{U}}^T \bm{b}

Two pairs of matrices in the projection equation reduce to identity
matrices.

.. math::

   \begin{array}{rl}
      \bm{p} &= \mathbf{A}\,\bm{\hat{x}} \\
      &= \mathbf{\tilde{U}\,\tilde{\Sigma}\,V}^T
         \mathbf{V\,\tilde{\Sigma}}^{+} \mathbf{\tilde{U}}^T \bm{b} \\
      &= \mathbf{\tilde{U}\,\tilde{U}}^T \bm{b}
   \end{array}
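As a quick numerical check of these identities, the following minimal
sketch builds a small over-determined system (the matrix and data here
are made-up examples) and compares the economy SVD solution and
projection with MATLAB's left-divide operator.

::

    % Sketch: least squares and projection with the economy SVD.
    A = [1 1; 1 2; 1 3; 1 4];     % 4x2 over-determined system
    b = [2.1; 2.9; 4.2; 4.8];     % noisy right-hand side

    [U, S, V] = svd(A, 'econ');   % U is 4x2, S is 2x2, V is 2x2
    disp(U'*U)                    % 2x2 identity
    disp(U*U')                    % 4x4 matrix, not the identity

    x_hat = V*(S\(U'*b));         % x_hat = V * Sigma^+ * U' * b
    disp(norm(x_hat - A\b))       % matches left-divide (near zero)

    p = U*(U'*b);                 % projection of b onto the columns of A
    disp(norm(p - A*x_hat))       % same as A*x_hat (near zero)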
.. index:: orth

As mentioned in :ref:`altProject`, orthonormal basis vectors of the
:math:`\bf{A}` matrix are needed for the projection. Either the modified
Gram–Schmidt algorithm, QR factorization, or the :math:`\bf{\tilde{U}}`
matrix from the economy SVD may be used. The MATLAB function ``orth`` uses
the economy SVD method to compute orthonormal basis vectors.

The ``four_projections`` script shows four ways to achieve projection of
an over-determined system. The plot of the projections is shown in
:ref:`SVDprojection`.

::

    % File: four_projections.m
    % Comparison of 4 ways to compute vector projections of an
    % over-determined system.
    %
    %% Over-determined System with noise
    t = linspace(0,20);
    y = 10 - 0.75.*t + 5*randn(1,100);
    b = y';
    scatter(t,b)
    A = ones(100,2);        % A is the design matrix
    A(:,2) = t';

    %% Basic pseudo-inverse projection onto the column space of A
    x_hat = (A'*A)\(A'*b);
    p1 = A*x_hat;

    %% Alternate Gram-Schmidt
    G = mod_gram_schmidt(A);
    % u1 = G(:,1);          % could use vectors for projection
    % u2 = G(:,2);
    % p2 = b'*u1*u1 + b'*u2*u2;
    p2 = G*G'*b;            % or matrix multiplication accomplishes the same

    %% Economy SVD projection
    [U, ~, ~] = svd(A, 'econ');
    p3 = U*U'*b;

    %% MATLAB's orth function
    O = orth(A);            % O and U should be the same
    p4 = O*O'*b;

    %% Plot
    figure, hold on, scatter(t, b)
    plot(t, p1), plot(t, p2), plot(t, p3)
    plot(t, p4)
    hold off
    legend('Noisy Data', 'Onto Column Space', 'Gram-Schmidt', ...
        'SVD', 'Orth function')

.. _SVDprojection:

.. figure:: SVDprojection.png
   :align: center
   :width: 40%

   Four vector projection alternatives. The projection lines appear as
   one line because they are on top of each other.

.. _condSVD:

Condition Number
----------------

.. index:: condition number

.. index:: invertible test

.. index:: rcond

The singular values of a singular matrix contain one or more zeros.
Likewise, matrices that are close to singular have near-zero singular
values. As described in :ref:`conditionNum`, the solution to a poorly
conditioned matrix equation is sensitive to perturbations of the elements
of :math:`\bm{b}`. Viewing the solution to
:math:`\mathbf{A}\,\bm{x} = \bm{b}` from the perspective of the outer
product of the SVD gives us an intuition into the sensitivity of
:math:`\bm{x}` to perturbations in :math:`\bm{b}` [GOLUB13]_.

.. math::

   \begin{aligned}
   \bm{x} &= \mathbf{A}^{-1}\bm{b}
       = \left(\mathbf{U\,\Sigma\,V}^T\right)^{-1}\bm{b} \\
     &= \sum_{i=1}^{n} \frac{\bm{u}_i^T \bm{b}}{\sigma_i}\bm{v}_i
   \end{aligned}

The scalar fractions :math:`\left(\bm{u}_i^T \bm{b}\right)/\sigma_i` are
dot products divided by singular values. Thus the magnitude of the
singular values has a significant impact on the sensitivity of the
:math:`\mathbf{A}\,\bm{x} = \bm{b}` problem. A matrix with singular values
close to zero is poorly conditioned.

The :math:`\norm{\cdot}_2` condition number of a matrix may be estimated
by the ratio of the largest and smallest singular values.

.. math::

   C = \frac{\sigma_{max}}{\sigma_{min}}

.. index:: rcond

A well-conditioned, full-rank matrix has a fairly small condition number.
Singular and nearly singular matrices have condition numbers that are
infinite or very large (several thousand or more). Thus the condition
number is a quick invertible test. To avoid division by zero, MATLAB uses
the reciprocal of the condition number. The ``rcond`` function calculates
an estimate of the reciprocal :math:`\norm{\cdot}_1` condition number. If
:math:`\bf{A}` is well conditioned, ``rcond(A)`` is near 1.0. If
:math:`\bf{A}` is poorly conditioned, ``rcond(A)`` is near 0. When using
the left-divide operator to find the solution to a matrix equation, one
may occasionally see a warning message such as the following.

::

    Warning: Matrix is close to singular or badly scaled. Results may be
    inaccurate. RCOND = 3.700743e-18.
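A short sketch shows the relationship between the singular values, the
condition number, and ``rcond``, using the Hilbert matrix as a standard
example of a poorly conditioned matrix.

::

    % Sketch: condition number from the singular values.
    A = hilb(8);              % Hilbert matrices are poorly conditioned

    s = svd(A);               % singular values, largest to smallest
    disp(s(1)/s(end))         % ratio estimate of the condition number
    disp(cond(A))             % MATLAB's 2-norm condition number (same ratio)
    disp(rcond(A))            % reciprocal condition estimate, near 0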
Polar Decomposition
-------------------

.. index:: polar decomposition

There is another factoring of a matrix that uses the sub-matrices of the
SVD. The *polar decomposition* splits a matrix into the product of a
symmetric matrix and an orthogonal matrix. The factoring is
:math:`\mathbf{A} = \mathbf{R\,Q}`, which is intended to be a
generalization to matrices of the polar representation of vectors on a
complex plane, :math:`\bm{z} = r\,e^{i\,\theta}`, where :math:`r` is the
scalar length of :math:`\bm{z}` and :math:`e^{i\,\theta}` gives the
direction of the vector according to Euler's complex exponential formula.

In the polar decomposition, :math:`\bf{Q}` is a unitary rotation matrix,
and :math:`\bf{R}` has the same :math:`\norm{\cdot}_2` matrix norm as
:math:`\bf{A}`. When multiplied by a vector, however, the :math:`\bf{R}`
matrix both scales and rotates the vector. The factors can be found by
simply inserting an identity matrix in the form of
:math:`\mathbf{U}^T \mathbf{U}` into the SVD equation.

.. math::

   \begin{array}{rl}
      \mathbf{A} &= \mathbf{U\,\Sigma}\,
          \left(\mathbf{U}^T \mathbf{U}\right)\, \mathbf{V}^T \\
        &= \left(\mathbf{U\,\Sigma\,U}^T\right)
           \left(\mathbf{U\,V}^T\right) \\
        &= \mathbf{R\,Q}
   \end{array}

.. math::

   \mathbf{R} = \mathbf{U\,\Sigma\,U}^T, \qquad
   \mathbf{Q} = \mathbf{U\,V}^T

The polar decomposition has applications in computer graphics and
materials engineering, where it is used to decompose stress
tensors [BUFFINGTON14]_.
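A brief sketch, assuming a small square matrix, finds the two factors
from the SVD and verifies their properties.

::

    % Sketch: polar decomposition computed from the SVD.
    A = [4 1; 2 3];               % any square matrix

    [U, S, V] = svd(A);
    R = U*S*U';                   % symmetric factor
    Q = U*V';                     % orthogonal (rotation) factor

    disp(norm(R - R'))            % R is symmetric (near zero)
    disp(norm(Q'*Q - eye(2)))     % Q is orthogonal (near zero)
    disp(norm(R*Q - A))           % R*Q reproduces A (near zero)
    disp(norm(R) - norm(A))       % R and A have equal 2-norms (near zero)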