12.4.12. Eigenvectors Following the Schur Decomposition¶
After the Schur decomposition from either the QR or Francis algorithm, we find the eigenvectors of \(\bf{A}\) using the Schur vectors, \(\bf{Q}\), and the eigenvectors, \(\bf{V}\), of the upper triangular matrix \(\bf{T}\). The eigenvectors of \(\bf{T}\) are found by the UTeigVect function listed below. Matrix multiplication then gives the final eigenvector matrix, \(\mathbf{X} = \mathbf{Q}\,\mathbf{V}\).
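As a minimal sketch, assuming the Schur factors come from the chapter's QR or Francis iteration (MATLAB's built-in schur is used here only as a stand-in for that code), the steps combine as follows.

% Eigenvectors of A from its Schur decomposition A = Q*T*Q'.
% schur stands in for the QR/Francis iteration of this chapter.
A = randn(5) + 1i*randn(5);
[Q, T] = schur(A, 'complex');  % Q unitary, T upper triangular
V = UTeigVect(T);              % eigenvectors of T (function listed below)
X = Q*V;                       % eigenvectors of A
lambda = diag(T);              % eigenvalues of A on the diagonal of T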
Again, we need to find the null solution of each characteristic matrix. But the triangular structure of \(\bf{T}\) allows for a more efficient solution than finding the SVDs. Let \(\mathbf{C}_k\) be the \(k\)th characteristic matrix, \(\mathbf{C}_k = \mathbf{T} - \lambda_k\,\mathbf{I}\). Because the eigenvalues are on the diagonal of \(\bf{T}\), the \(k\)th diagonal element of \(\mathbf{C}_k\) is zero. We will partition \(\mathbf{C}_k\) such that \(\mathbf{S}_{11}\) is size \(k{\times}k\) with a zero in its last diagonal position,
\[\mathbf{C}_k\,\bm{v} = \begin{bmatrix} \mathbf{S}_{11} & \mathbf{S}_{12} \\ \mathbf{0} & \mathbf{S}_{22} \end{bmatrix} \begin{bmatrix} \bm{v}_1 \\ \bm{v}_2 \end{bmatrix} = \bm{0}.\]
\(\mathbf{S}_{11}\) and \(\mathbf{S}_{22}\) are upper triangular, while \(\mathbf{S}_{12}\) is a full rectangular block. \(\mathbf{S}_{22}\) is non-singular because all of its diagonal elements are nonzero. Therefore, \(\bm{v}_2\) must equal zero. So equation (12.14) is reduced to \(\mathbf{S}_{11}\,\bm{v}_1 = \bm{0}\). We can further partition \(\mathbf{S}_{11}\) to separate its last row and column from a \((k-1){\times}(k-1)\), non-singular, upper triangular matrix \(\bf{S}\).
Setting the last element of \(\bm{v}_1\) to one, the solution may then be found with back substitution on the upper triangular matrix \(\bf{S}\) and the vector \(\bm{r}\), the first \(k-1\) elements of the last column of \(\mathbf{S}_{11}\),
\[\mathbf{S}\,\hat{\bm{v}}_1 = -\bm{r},\]
where \(\mathbf{S} \in \mathbb{C}^{(k-1){\times}(k-1)}\), \(\bm{r} \in \mathbb{C}^{(k-1){\times}1}\), and \(\hat{\bm{v}}_1 \in \mathbb{C}^{(k-1){\times}1}\). The \(j\)th element of the \(k\)th eigenvector of \(\bf{T}\) is then given by
\[\bm{v}_k(j) = \begin{cases} \hat{\bm{v}}_1(j) & j < k \\ 1 & j = k \\ 0 & j > k. \end{cases}\]
Finally, each eigenvector is scaled to a unit vector.
function V = UTeigVect(T)
% UTEIGVECT - Find the eigenvectors of the upper triangular
%             matrix T. The eigenvalues should be supplied on the
%             diagonal of T.
%
% The structure of T allows finding the eigenvectors with
% triangular back substitution rather than the null
% solution of the characteristic equation.
    n = size(T,2);
    V = eye(n);
    I = eye(n);
    for k = 2:n
        C = T - T(k,k)*I;
        S = C(1:k-1, 1:k-1);
        r = C(1:k-1, k);
        % solve with triangular back substitution
        V(1:k-1, k) = -S\r;
        V(1:k, k) = V(1:k, k)/norm(V(1:k, k));
    end
end
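For a quick check, UTeigVect may be applied to an arbitrary upper triangular matrix with distinct diagonal elements; the matrix below is chosen only for illustration.

% Verify UTeigVect on a small upper triangular test matrix.
T = triu(magic(4));           % arbitrary upper triangular matrix
V = UTeigVect(T);
L = diag(diag(T));            % its eigenvalues are the diagonal elements
norm(T*V - V*L, 'fro')        % should be near zero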
12.4.13. Testing Eigensystem Correctness¶
To test the correctness of the eigendecomposition, the MATLAB expression A*X - X*L, where X holds the eigenvectors as columns and L is the diagonal matrix of eigenvalues, should produce a matrix with values close to zero. The
eigenvalues should be the same as returned from MATLAB’s eig
function. However, complex eigenvectors can be different and still be
correct because they may be multiplied by a complex scalar. Eigenvectors
are invariant to scale. That is to say that if \(\bm{x}\) is an eigenvector of \(\mathbf{A}\), then so is \(c\,\bm{x}\), where \(c\) is any nonzero scalar constant. Functions that return eigenvectors should always
return unit-length vectors. For real eigenvectors, two different
functions might return vectors scaled by -1 of each other. That is easy
to detect. But complex eigenvectors could be multiplied by any unit-modulus number on the complex plane (\(e^{i\,\theta}\)), where \(\theta \in [0, 2\pi)\).
Thus, two correct sets of eigenvectors can appear
different. So to test an eigendecomposition, enter the command norm(A*X - X*L, 'fro'), which will return a number close to zero if the eigenvalues and eigenvectors are correct.
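As a minimal sketch of such a test, again with MATLAB's built-in schur and the UTeigVect function above standing in for the chapter's eigensystem code:

% Sketch of the correctness test.
A = randn(6) + 1i*randn(6);     % any test matrix
[Q, T] = schur(A, 'complex');   % stand-in for the QR/Francis iteration
X = Q*UTeigVect(T);             % eigenvector matrix of A
L = diag(diag(T));              % diagonal matrix of eigenvalues
norm(A*X - X*L, 'fro')          % near zero if X and L are correct
norm(sort(diag(L)) - sort(eig(A)))  % eigenvalues agree with eig up to ordering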