9.6. Other Applications of the SVD
In our first introduction to the SVD (The Singular Value Decomposition), we described how it is
used to find the inverse of a matrix, the rank of a matrix, and the null space solution
of a singular matrix equation. We also discussed calculating the pseudo-inverse
of rectangular matrices in Over-determined Pseudo-inverse and The Preferred Under-determined Solution. Here we list
five additional applications of the SVD to linear algebra: vector projections
with the economy SVD, finding orthogonal basis vectors from the SVD's
U factor, identifying the four fundamental subspaces from the SVD,
finding the condition number of a matrix, and the polar decomposition.
9.6.1. Projection and the Economy SVD
The SVD has application to vector projections as described in
Over-determined Systems and Vector Projections. We begin with an observation related to the extra
rows of zeros in the Σ matrix for over-determined
systems. As discussed in SVD of Rectangular Matrices, some columns of the
U matrix do not contribute to the final SVD product because
they get multiplied by zeros in Σ.
- Over-determined full SVD: A = U Σ V^T, where for an m×n matrix A with m > n, U is m×m and Σ is m×n with its last m − n rows all zeros.
- Economy SVD: A = Ũ Σ̃ V^T, where Ũ is m×n and Σ̃ is n×n.
The economy SVD removes the unused columns of U and the rows of zeros in Σ.
The economy SVD is a valid factoring. The only noticeable application
difference is that Ũ is not unitary: Ũ^T Ũ = I, but
Ũ Ũ^T ≠ I. The economy
SVD is used to solve over-determined systems of equations
(A x = b, with m > n) and projection approximations.
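The shapes and the one-sided orthogonality of Ũ are easy to verify numerically. A brief sketch using NumPy (its `numpy.linalg.svd` function returns the economy form when called with `full_matrices=False`):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 2))        # over-determined: m = 6, n = 2

# Full SVD: U is 6x6; the last 4 columns multiply rows of zeros in Sigma.
U, s, Vt = np.linalg.svd(A)
print(U.shape)                          # (6, 6)

# Economy SVD: U_econ is 6x2, Sigma is 2x2.
U_econ, s_econ, Vt_econ = np.linalg.svd(A, full_matrices=False)
print(U_econ.shape)                     # (6, 2)

# U_econ is not unitary: U_econ' * U_econ = I, but U_econ * U_econ' != I.
print(np.allclose(U_econ.T @ U_econ, np.eye(2)))   # True
print(np.allclose(U_econ @ U_econ.T, np.eye(6)))   # False
```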
Two pairs of matrices in the projection equation reduce to identity matrices
when the economy SVD is substituted for A:
p = A (A^T A)^{-1} A^T b = Ũ Σ̃ V^T (V Σ̃² V^T)^{-1} V Σ̃ Ũ^T b = Ũ Ũ^T b.
As mentioned in Alternate Projection Equation, orthonormal basis vectors of the
A matrix's column space are needed for the projection. Either the modified
Gram–Schmidt algorithm, QR factorization, or the Ũ
from the economy SVD may be used. The MATLAB function
orth
uses the
economy SVD method to compute orthonormal basis vectors.
The fourProjections
script shows four ways to
achieve projection of an over-determined system. The plot of the
projections is shown in Four vector projection alternatives. The projection lines appear as
one line because they lie on top of each other.
% File: four_projections.m
% Comparison of 4 ways to compute vector projections of an
% over-determined system.
%
%% Over-determined System with noise
t = linspace(0,20);
y = 10 - 0.75.*t + 5*randn(1,100);
b = y';
scatter(t,b)
A = ones(100,2); % A is the design matrix
A(:,2) = t';
%% basic pseudo-inverse projection onto the column space of A
x_hat = (A'*A)\(A'*b);
p1 = A*x_hat;
%% Alternate Gram-Schmidt
G = mod_gram_schmidt(A);
% u1 = G(:,1); % could use vectors for projection
% u2 = G(:,2);
% p2 = b'*u1*u1 + b'*u2*u2;
p2 = G*G'*b; % or matrix multiplication accomplishes the same
%% Econ SVD projection
[U, ~, ~] = svd(A, 'econ');
p3 = U*U'*b;
%% MATLAB's Orth function
O = orth(A); % O and U should be the same
p4 = O*O'*b;
%% Plot
figure, hold on, scatter(t, b)
plot(t, p1), plot(t, p2), plot(t, p3), plot(t, p4)
hold off
legend('Noisy Data', 'Onto Column Space', 'Gram-Schmidt', ...
'SVD', 'Orth function')
9.6.2. Condition Number
The singular values of a singular matrix will contain one or more zeros.
Likewise, matrices that are close to singular will contain near-zero
singular values. As described in Matrix Condition Number, the solution to
a poorly conditioned matrix equation is sensitive to perturbations of
the elements of A. Viewing the solution to A x = b
from the perspective of the outer
product of the SVD gives us an intuition into the sensitivity of
x to perturbations in b
[GOLUB13]:
x = A^{-1} b = V Σ^{-1} U^T b = (u_1^T b / σ_1) v_1 + (u_2^T b / σ_2) v_2 + ⋯ + (u_n^T b / σ_n) v_n.
The scalar fractions u_i^T b / σ_i are
dot products divided by singular values. Thus the magnitude of the
singular values has a significant impact on the sensitivity of the
problem. A matrix with singular
values close to zero is poorly conditioned.
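A small numerical illustration of this sensitivity (sketched in NumPy; the matrix here is contrived so that one singular value is tiny):

```python
import numpy as np

# A nearly singular 2x2 matrix: its second singular value is tiny.
A = np.array([[1.0, 1.0],
              [1.0, 1.0000001]])
s = np.linalg.svd(A, compute_uv=False)
print(s)        # one singular value near 2, one near 5e-8

b = np.array([1.0, 1.0])
x = np.linalg.solve(A, b)

# A tiny perturbation of b produces a large change in x because the
# dot product u_i' * b is divided by the small singular value.
x_pert = np.linalg.solve(A, b + np.array([0.0, 1e-6]))
print(np.linalg.norm(x_pert - x))   # large relative to the 1e-6 perturbation
```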
The condition number of a matrix may be estimated
by the ratio of the largest and smallest singular values,
κ(A) = σ₁/σₙ.
A well-conditioned full rank matrix will have a fairly small condition number. Singular
and near-singular matrices will have condition numbers of infinity or
very large values (several thousand or more). Thus the condition number is a quick
invertibility test. To avoid division by zero, MATLAB uses the reciprocal
of the condition number. The rcond
function calculates an estimate
of the reciprocal condition number. If
A is well conditioned,
rcond(A)
is near 1.0. If
A is poorly conditioned,
rcond(A)
is near 0.
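The same estimate can be sketched outside of MATLAB. In NumPy, `numpy.linalg.cond` returns σ_max/σ_min for the default 2-norm, and its reciprocal plays the role of rcond:

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])          # well conditioned
s = np.linalg.svd(A, compute_uv=False)
kappa = s.max() / s.min()           # ratio of singular values
print(kappa)                        # 1.5, matches np.linalg.cond(A)

B = np.array([[1.0, 2.0],
              [2.0, 4.0000001]])    # nearly singular
print(np.linalg.cond(B))            # very large condition number
print(1.0 / np.linalg.cond(B))      # reciprocal condition, near 0 like rcond
```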
When using the left-divide operator to find the solution to a matrix equation, one may occasionally see a warning message such as the following.
Warning: Matrix is close to singular or badly scaled. Results may be inaccurate. RCOND = 3.700743e-18.
9.6.3. Polar Decomposition
There is another factoring of a matrix that uses the sub-matrices of the
SVD. The polar decomposition splits the matrix up into a symmetric
matrix and an orthogonal matrix. The factoring is
A = Q S, which is intended to be a
generalization to matrices of the polar representation of vectors on a
complex plane, z = r e^{iθ}, where
r is the
scalar length of z
and e^{iθ}
gives the
direction of the vector according to Euler's complex exponential
formula. In the polar decomposition,
Q is a unitary
rotation matrix, and
S has the same
matrix norm as
A. But with multiplication by a vector, the
S matrix will both scale and rotate the vector. The factors can be
found by simply inserting an identity matrix in the form of V^T V
into the SVD equation:
A = U Σ V^T = U (V^T V) Σ V^T = (U V^T)(V Σ V^T) = Q S.
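A minimal sketch of this construction in NumPy, with Q = U V^T and S = V Σ V^T as above (assuming a square matrix A):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))

# Polar decomposition from the SVD: A = (U V^T)(V Sigma V^T) = Q S
U, s, Vt = np.linalg.svd(A)
Q = U @ Vt                      # unitary (rotation) factor
S = Vt.T @ np.diag(s) @ Vt      # symmetric positive semidefinite factor

print(np.allclose(Q @ S, A))                   # True: A = Q S
print(np.allclose(Q.T @ Q, np.eye(3)))         # True: Q is orthogonal
print(np.allclose(S, S.T))                     # True: S is symmetric
# S and A share the same singular values, hence the same matrix 2-norm.
print(np.isclose(np.linalg.norm(S, 2), np.linalg.norm(A, 2)))  # True
```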
The polar decomposition has application to computer graphics and materials engineering where it is used to decompose stress tensors [BUFFINGTON14].