.. _FrancisEigenvectors:

Eigenvectors Following the Schur Decomposition
----------------------------------------------

.. index:: computing eigenvectors

.. index:: eigenvectors

.. index:: null space

.. index:: null

.. index:: svd

After the Schur decomposition from either the QR algorithm or the Francis
algorithm using Francis steps of degree one, the Schur vectors,
:math:`\bf{Q}`, are used to find the eigenvectors. Only the eigenvectors,
:math:`\bf{V}`, of the upper triangular matrix :math:`\bf{T}` are needed to
compute the eigenvectors of :math:`\bf{A}`; finding :math:`\bf{V}` is
implemented in the function ``UTeigVect``. The final eigenvector matrix is
found with a matrix multiplication.

.. math::

    \mathbf{X} = \mathbf{Q\,V}

.. index:: characteristic matrix

Again, we need to find the null solution of each characteristic matrix. But
the triangular structure of :math:`\bf{T}` allows for a more efficient
solution than computing SVDs. Let :math:`\mathbf{C}_k` be the :math:`k`\ th
characteristic matrix, :math:`\mathbf{C}_k = \mathbf{T} - \lambda_k\,\mathbf{I}`.
Because the eigenvalues are on the diagonal of :math:`\bf{T}`, the
:math:`k`\ th diagonal element of :math:`\mathbf{C}_k` is zero. We partition
:math:`\mathbf{C}_k` such that :math:`\mathbf{S}_{11}` is of size
:math:`k{\times}k` with a zero in its last diagonal position.

.. math::
    :label: eq-eigvect1

    \mathbf{C}_k = \mat{\mathbf{S}_{11} \mathbf{S}_{12}; \mathbf{0} \mathbf{S}_{22}},
    \qquad
    \mat{\mathbf{S}_{11} \mathbf{S}_{12}; \mathbf{0} \mathbf{S}_{22}} \,
    \vector{\bm{v}_1; \bm{v}_2} = \vector{\bm{0}; \bm{0}}

:math:`\mathbf{S}_{11}` and :math:`\mathbf{S}_{22}` are square and upper
triangular, while :math:`\mathbf{S}_{12}` is, in general, a full rectangular
block. :math:`\mathbf{S}_{22}` is non-singular because all of its diagonal
elements are nonzero. Therefore, :math:`\bm{v}_2` must equal zero, and
equation :eq:`eq-eigvect1` reduces to
:math:`\mathbf{S}_{11}\,\bm{v}_1 = \bm{0}`. We can further partition
:math:`\mathbf{S}_{11}` to separate its last row and column from a
non-singular, upper triangular matrix :math:`\bf{S}`. Because the last row of
:math:`\mathbf{S}_{11}` is zero, the last element of :math:`\bm{v}_1` is a
free variable, which we set to one.

.. math::

    \mathbf{S}_{11} = \mat{\mathbf{S} \bm{r}; \bm{0} 0},
    \qquad
    \mat{\mathbf{S} \bm{r}; \bm{0} 0}\,\vector{\hat{\bm{v}}_1; 1}
    = \vector{\bm{0}; \bm{0}}

The solution may then be found by back substitution with the upper triangular
matrix :math:`\bf{S}` and the vector :math:`\bm{r}`.

.. math::

    \mathbf{S}\,\hat{\bm{v}}_1 + \bm{r} = \bm{0},
    \qquad
    \hat{\bm{v}}_1 = -\mathbf{S}^{-1}\,\bm{r},

where :math:`\mathbf{S} \in \mathbb{C}^{(k-1){\times}(k-1)}`,
:math:`\bm{r} \in \mathbb{C}^{(k-1){\times}1}`, and
:math:`\hat{\bm{v}}_1 \in \mathbb{C}^{(k-1){\times}1}`. The :math:`j`\ th
element of the :math:`k`\ th eigenvector of :math:`\bf{T}` is then given by

.. math::

    v_{j,k} = \begin{cases}
        \left(-\mathbf{S}_k^{-1}\,\bm{r}_k\right)_j
            & \text{for } j \in \{1, \cdots, k-1\} \\
        1   & \text{for } j = k \\
        0   & \text{for } j \in \{k+1, \cdots, n\}.
    \end{cases}

Finally, each eigenvector is scaled to a unit vector.

.. index:: UTeigVect

::

    function V = UTeigVect(T)
    % UTEIGVECT - Find the eigenvectors of upper triangular matrix T.
    % The eigenvalues should be supplied on the diagonal of T.
    % The structure of T allows finding the eigenvectors with triangular
    % back substitution rather than as the null solution of the
    % characteristic equation.
        n = size(T,2);
        V = eye(n);
        I = eye(n);
        for k = 2:n
            C = T - T(k,k)*I;        % k-th characteristic matrix
            S = C(1:k-1, 1:k-1);     % upper triangular block of S_11
            r = C(1:k-1, k);         % last column of S_11, above the zero
            % solve S*v = -r with triangular back substitution
            V(1:k-1, k) = -S\r;
            V(1:k, k) = V(1:k, k)/norm(V(1:k, k));
        end
    end
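As a brief usage sketch (not part of the listing above), the eigenvectors of
:math:`\bf{A}` follow from ``UTeigVect`` and the Schur vectors. Here MATLAB's
built-in ``schur`` with the ``'complex'`` option stands in for the
Francis-step Schur decomposition developed earlier, and ``A`` is an arbitrary
test matrix.

::

    % Sketch: recover the eigenvectors of A from its Schur decomposition.
    % schur(A, 'complex') is used here as a stand-in for the Francis-step
    % algorithm described above; any square test matrix A will do.
    A = randn(6);
    [Q, T] = schur(A, 'complex');   % A = Q*T*Q', with T upper triangular
    V = UTeigVect(T);               % eigenvectors of T
    X = Q*V;                        % eigenvectors of A
    L = diag(diag(T));              % eigenvalues of A, as a diagonal matrix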
.. index:: testing eigensystem correctness

.. note::
    To test the correctness of the eigendecomposition, the MATLAB command
    ``A*X - X*L`` should produce a matrix with values close to zero. The
    eigenvalues should be the same as those returned by MATLAB's ``eig``
    function. However, complex eigenvectors can be different and still be
    correct because they may be multiplied by a complex scalar. Eigenvectors
    are invariant to scale. That is to say that if :math:`\bm{x}` is an
    eigenvector of :math:`\mathbf{A}`, so is :math:`c\,\bm{x}`, where
    :math:`c` is any scalar constant. For the convenience of users, functions
    that return eigenvectors should always return unit length vectors. For
    real eigenvectors, two different functions might return vectors that are
    multiplied by :math:`-1` of each other, which is easy to detect. But
    complex eigenvectors could be multiplied by any unit-magnitude number on
    the complex plane (:math:`e^{i\,\theta}`, where
    :math:`\theta \in [0, 2\pi)`). Thus, two correct sets of eigenvectors can
    appear different. Rather than comparing eigenvectors directly, enter the
    command ``norm(A*X - X*L, 'fro')``, which will return a number close to
    zero if the eigenvalues and eigenvectors are correct.
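The phase ambiguity described in the note can also be removed before
comparing two sets of complex eigenvectors. One common convention, shown here
as an illustration rather than a prescription from the text, is to rotate
each column so that its largest-magnitude entry is real and positive; the
Frobenius-norm residual check works regardless of the convention. The
matrices ``A``, ``X``, and ``L`` are assumed to come from an
eigendecomposition such as the one sketched above.

::

    % Sketch: remove the arbitrary unit-magnitude factor e^(i*theta) from
    % each eigenvector so two correct sets can be compared entry by entry.
    for k = 1:size(X, 2)
        [~, idx] = max(abs(X(:,k)));        % largest-magnitude entry
        phase = X(idx,k) / abs(X(idx,k));   % its unit-magnitude phase
        X(:,k) = X(:,k) / phase;            % make that entry real and positive
    end
    norm(A*X - X*L, 'fro')   % near zero for a correct eigendecomposition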