12.4.5. Iterative QR Algorithm

The iterative QR algorithm is a well-known algorithm for finding the eigenvalues of a matrix. We are specifically referring to the explicitly shifted QR algorithm. To be more precise, the QR algorithm moves a matrix from upper Hessenberg form to upper triangular form with the eigenvalues on the diagonal. The remaining algorithm steps build the diagonal eigenvalue matrix, \mathbf{\Lambda}, and the matrix of eigenvectors, \mathbf{X}. The iterative algorithm relies on the QR matrix factorization (QR Decomposition).

We begin by describing the general concept of the algorithm before introducing refinements. First, use QR factorization, \mathbf{A} = \mathbf{Q\,R}, to form a new matrix \mathbf{A}_1 = \mathbf{R\,Q}. The original \mathbf{A} matrix and the modified \mathbf{A}_1 matrix are similar matrices. The QR factorization is then repeated on \mathbf{A}_1 to find \mathbf{A}_2 = \mathbf{R}_1\,\mathbf{Q}_1, continuing until \mathbf{A}_k is upper triangular and additional iterations no longer change the result. As the \mathbf{A}_k matrices converge toward upper triangular, the \mathbf{Q}_k matrices converge toward the identity matrix. When \mathbf{A}_k is upper triangular, the eigenvalues are on its diagonal.

Since \mathbf{A}_i = \mathbf{Q}_i\,\mathbf{R}_i, then \mathbf{R}_i = \mathbf{Q}_i^H\,\mathbf{A}_i, and \mathbf{A}_{i+1} = \mathbf{R}_i\,\mathbf{Q}_i = \mathbf{Q}_i^H\,\mathbf{A}_i\,\mathbf{Q}_i, which establishes the similarity between \mathbf{A}_i and \mathbf{A}_{i+1}.
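
As a sketch of this basic idea, the unshifted iteration can be written in a few lines of MATLAB. The test matrix here is an assumption chosen only for illustration; without the refinements described next, convergence can be slow.

A = [4 1 0; 1 3 1; 0 1 2];    % small symmetric test matrix (assumed example)
Ak = A;
for iter = 1:100
    [Q, R] = qr(Ak);          % factor the current iterate
    Ak = R*Q;                 % R*Q = Q'*Ak*Q, so Ak stays similar to A
end
disp(sort(diag(Ak)))          % approximate eigenvalues
disp(sort(eig(A)))            % compare with MATLAB's eig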

12.4.5.1. Hessenberg Form, Wilkinson Shift, and Deflation

Three algorithm refinements lead to faster convergence. First, convert the matrix to upper Hessenberg form by similarity transforms (Hessenberg Decomposition). The second refinement shifts the diagonal by a scalar, \rho, before each QR decomposition, as shown in the equations below. [1] The shift is added back to the diagonal values after the decomposition. When the next eigenvalue to find is real, the value of the last diagonal element works for the shift. A better approach, which also finds complex eigenvalues, is to calculate the two eigenvalues of the 2{\times}2 lower-right submatrix and select the eigenvalue closest to the last diagonal value as the shift. This is called a Wilkinson shift [WILKINSON65]. The algorithm cannot find the complex eigenvalues of a real matrix without the shift. If a pair of complex conjugate eigenvalues is found from the 2{\times}2 submatrix, then with repeated iterations the complex eigenvalues appear on the diagonal as the subdiagonal elements converge to zero. The algorithm finds eigenvalue \lambda_k when the element at \mathbf{A}(k, k-1) converges to within the tolerance value of zero. The third refinement is deflation, which reduces the size of the matrix to exclude the last row after finding each eigenvalue.

Note

If we only wanted to find the eigenvalues, then we would also remove the last column with deflation. However, if we want to find the eigenvectors from the Schur vectors and the eigenvectors of the upper triangular matrix, then we need to keep the columns as part of what is passed to the QR decomposition function. The multiplications that happen during the QR decomposition need to affect everything above the diagonal. Thus, deflation causes us to perform the QR decomposition on underdetermined (wider than tall) matrices.

\begin{aligned}
    &\mathbf{A}_{j-1} - \rho\,\mathbf{I} = \mathbf{Q\,R} \\
    &\mathbf{A}_j = \mathbf{R\,Q} + \rho\,\mathbf{I}
\end{aligned}
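
The eig22 function below computes the two eigenvalues of a 2{\times}2 matrix directly from its trace and determinant, which is all that the Wilkinson shift selection requires.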

function [lambda1, lambda2] = eig22(A)
% EIG22 - Quick eigenvalue calculation of a 2-by-2 matrix
% [lambda1, lambda2] = EIG22(A) returns the eigenvalues in two
% variables.

    m = (A(1,1) + A(2,2))/2;            % half of the trace
    p = A(1,1)*A(2,2) - A(1,2)*A(2,1);  % determinant
    s = sqrt(m^2 - p);                  % complex when m^2 < p
    lambda1 = m + s;                    % roots of x^2 - 2*m*x + p
    lambda2 = m - s;
end
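
For example, one explicitly shifted step might look like the following sketch, where the matrix values are made up and eig22 supplies the two candidates for the Wilkinson shift.

T = [4 1 2; 3 5 1; 0 2 6];            % assumed upper Hessenberg example
[rho1, rho2] = eig22(T(2:3,2:3));     % eigenvalues of the 2-by-2 corner
if abs(T(3,3) - rho1) < abs(T(3,3) - rho2)
    rho = rho1;                       % Wilkinson shift: closest to T(3,3)
else
    rho = rho2;
end
[Q1, R1] = qr(T - rho*eye(3));        % factor the shifted matrix
T = R1*Q1 + rho*eye(3)                % add the shift back; T stays similar

With repeated steps of this form, T(3,2) converges toward zero and T(3,3) approaches an eigenvalue, at which point the matrix is deflated.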

12.4.5.2. The eigQR Function

The eigQR function is a simple implementation of the iterative QR algorithm that finds eigenvalues and eigenvectors. It first finds the Schur decomposition, accumulating the Schur vectors in \mathbf{Q} while driving \mathbf{T} to upper triangular form, and then recovers the eigenvectors from the Schur vectors and the eigenvectors of the upper triangular matrix.

function [X, Lambda] = eigQR(A)
% EIGQR - Iterative QR algorithm to find eigenvalues. This
%         is the explicitly-shifted QR algorithm.
%
% A - Square matrix
% EIGQR(A) or L = EIGQR(A) returns the eigenvalues of A.
% [X, L] = EIGQR(A) returns the eigenvectors of A as the columns
%        of the X matrix and the eigenvalues of A are along the diagonal
%        of matrix L.
%
% NOTE: This code is for educational purposes.
% For other purposes, use MATLAB's eig function.
%
% See also: EIG, EIG22, HESS, QR, EIGFRANCIS, UTEIGVECT

    [m, n] = size(A);
    if m ~= n
        error("Matrix must be square to find eigenvalues")
    end
    % Start with Hessenberg form - zeros below the first subdiagonal
    [Q,T] = hess(A);
    tol = 1e-14;
    eigs = zeros(n,1);
    k = n;
    I = eye(n);
    while k > 1
        [rho1, rho2] = eig22(T(k-1:k,k-1:k));
        if abs(T(k,k)-rho1) < abs(T(k,k)-rho2)
            shift = rho1*I;
        else
            shift = rho2*I;
        end
        % MATLAB's qr is faster, but qrFactor allows for
        % breakpoints to understand the algorithm better.
        B = T(1:k,1:n) - shift;
        [Qi, R] = qrFactor(B); % qr(B);
        Qn = eye(n);
        Qn(1:k,1:k) = Qi;
        T(1:k,1:n) = R*Qn + shift;
        Q = Q*Qn;
        % Converge when kth subdiagonal near zero
        if abs(T(k,k-1)) < tol*(abs(T(k,k)) + abs(T(k-1,k-1)))
            T(k,k-1) = 0;
            eigs(k) = T(k,k); % kth eigenvalue
            k = k - 1;        % deflation
            I = eye(k, n);
        end
    end
    eigs(1) = T(1,1);
    if nargout <= 1 % only want eigenvalues
        X = sort(eigs);
    else            % eigenvectors also
        Lambda = diag(eigs);
        V = UTeigVect(T);
        X = Q*V;
    end
end
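
A quick check of eigQR against MATLAB's eig might look like the following sketch. It assumes the qrFactor and UTeigVect functions referenced above are available on the path, and the test matrix is made up.

A = [6 2 1; 2 5 1; 1 1 4];       % assumed example matrix
[X, Lambda] = eigQR(A);
disp(norm(A*X - X*Lambda))       % residual, should be near zero
disp(sort(diag(Lambda)))         % eigenvalues from eigQR
disp(sort(eig(A)))               % compare with MATLAB's eig
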
[1] Since the shift changes the matrix before each QR decomposition, the \mathbf{Q} matrices cannot be used to find the eigenvectors.