8.5. Diagonalization and Powers of A

Here, we establish a factorization of a matrix based on its eigenvalues and eigenvectors. The factorization is called diagonalization or the eigendecomposition. Then, we use the factorization to find a computationally efficient way to compute powers of the matrix (\(\mathbf{A}^k\)).

8.5.1. Diagonalization

The eigenpairs of the \(n{\times}n\) matrix \(\bf{A}\) give us \(n\) equations.

\[\begin{split}\begin{array}{rl} \mathbf{A}\,\bm{x}_1 &= \lambda_1\,\bm{x}_1 \\ \mathbf{A}\,\bm{x}_2 &= \lambda_2\,\bm{x}_2 \\ &\vdots \\ \mathbf{A}\,\bm{x}_n &= \lambda_n\,\bm{x}_n \\ \end{array}\end{split}\]

We wish to combine the \(n\) equations into one matrix equation. Consider the product of \(\bf{A}\) and the matrix \(\bf{X}\) whose columns are the eigenvectors.

\[\begin{split}\begin{array}{ll} \mathbf{A\,X} &= \mathbf{A} \begin{bmatrix} \vertbar{} & \vertbar{} & {} & \vertbar{} \\ \bm{x}_1 & \bm{x}_2 & \cdots{} & \bm{x}_n \\ \vertbar{} & \vertbar{} & {} & \vertbar{} \end{bmatrix} \\ &{} \\ &= \begin{bmatrix} \vertbar{} & \vertbar{} & {} & \vertbar{} \\ {\lambda_1\,\bm{x}_1} & {\lambda_2\,\bm{x}_2} & \cdots{} & {\lambda_n\,\bm{x}_n} \\ \vertbar{} & \vertbar{} & {} & \vertbar{} \end{bmatrix} \\[18pt] &= \begin{bmatrix} \vertbar{} & \vertbar{} & {} & \vertbar{} \\ \bm{x}_1 & \bm{x}_2 & \cdots{} & \bm{x}_n \\ \vertbar{} & \vertbar{} & {} & \vertbar{} \end{bmatrix} \begin{bmatrix} \lambda_1 & 0 & \cdots{} & 0 \\ 0 & \lambda_2 & \cdots{} & 0 \\ \vdots{} & \vdots{} & \ddots{} & \vdots{} \\ 0 & 0 & \cdots{} & \lambda_n \end{bmatrix} \\ &= \mathbf{X\,\Lambda} \end{array}\end{split}\]

The matrix \(\bf{\Lambda}\) is a diagonal matrix of the eigenvalues. If the matrix has \(n\) linearly independent eigenvectors, which is guaranteed when all eigenvalues are distinct (no repeated \(\lambda\)s), then \(\bf{X}\) is invertible.

\[\begin{split}\begin{array}{c} \mathbf{A\,X} = \mathbf{X\,\Lambda} \\ \mathbf{X}^{-1}\mathbf{A\,X} = \mathbf{\Lambda} \\ {}\\ \boxed{\mathbf{A} = \mathbf{X}\,\mathbf{\Lambda}\,\mathbf{X}^{-1}} \end{array}\end{split}\]
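As a quick numerical check in MATLAB, we can build the factorization with eig and confirm that it reconstructs \(\bf{A}\). The matrix here is a hypothetical example chosen to have distinct eigenvalues.

>> A = [4 1; 2 3];    % distinct eigenvalues, 5 and 2
>> [X, L] = eig(A);   % columns of X are eigenvectors; L is diagonal
>> norm(A - X*L/X)    % X*L/X is X*L*inv(X); the result is near zero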

8.5.1.1. When Does Diagonalization Not Work?

Notice that the inverse of the eigenvector matrix must exist for the eigendecomposition to be valid, so the eigenvectors must be linearly independent of one another.

One example of a matrix with repeated eigenvectors is a \(2{\times}2\) triangular matrix with the same value at both positions on the forward diagonal and a zero on the backward diagonal. The determinant of the characteristic matrix has a repeated root, so the matrix has a repeated eigenvalue; and because of the nonzero off-diagonal element, it also has only one independent eigenvector. Thus, the eigenvector matrix is singular and may not be inverted.

\[\begin{split}\det \begin{bmatrix} {a-\lambda} & b \\ 0 & {a-\lambda} \end{bmatrix} = (a - \lambda)(a - \lambda) = (a - \lambda)^2\end{split}\]

>> A = [5 7; 0 5]
A =
    5     7
    0     5
>> [X, L] = eig(A)
X =
    1.0000   -1.0000
         0    0.0000
L =
    5     0
    0     5
>> rank(X)
ans =
    1

8.5.1.2. Diagonalization of a Symmetric Matrix

Symmetric matrices have a simplified diagonalization because, by the spectral theorem, the matrix of eigenvectors of a symmetric matrix, \(\bf{S}\), is orthogonal (see Schur Decomposition of Symmetric Matrices). Recall from Matrix Transpose Properties that orthogonal matrices have the property \(\mathbf{Q}^{T} = \mathbf{Q}^{-1}\). Thus, the diagonalization of a symmetric matrix, shown below, is also called the spectral decomposition of a symmetric matrix.

\[\boxed{\mathbf{S} = \mathbf{Q\,\Lambda\,Q}^T}\]

Note

Recall that the columns of an orthogonal matrix must be unit vectors (length of 1), which is how the eig function returns them.
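As a quick check of the orthogonality claim, we can compute the eigenvectors of a symmetric matrix and verify that \(\mathbf{Q}^{T}\mathbf{Q} = \mathbf{I}\). The symmetric moler test matrix from MATLAB's gallery function (also used in the example below) serves as the input.

>> S = gallery('moler', 4);   % symmetric 4x4 test matrix
>> [Q, L] = eig(S);
>> norm(Q'*Q - eye(4))        % near zero, so Q' = inv(Q)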

8.5.2. Powers of A

How does one compute matrix \(\bf{A}\) raised to the power \(k\) (\(\mathbf{A}^k\))?

The brute-force approach is to multiply \(\bf{A}\) by itself \(k-1\) times, which is slow when \(k\) and the size of the matrix (\(n\)) are large. If we are clever about the order of the multiplications, repeatedly squaring the previous product, the work can be reduced to about \(\log_{2}(k)\) matrix multiplications. That is:

\[\mathbf{A}^k = \underbrace{\underbrace{\underbrace{ \mathbf{A\,A}}_{\mathbf{A}^2}\, \mathbf{A}^2}_{\mathbf{A}^4} \, \mathbf{A}^4 \ldots \mathbf{A}^{k/2}}_{\mathbf{A}^{k}}.\]
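The function below is a minimal sketch of this repeated-squaring strategy in MATLAB. It is a hypothetical helper (matpow is not a built-in function) and assumes \(k\) is a nonnegative integer; it handles \(k\) values that are not powers of 2 by folding in the current power whenever the corresponding binary digit of \(k\) is set.

function P = matpow(A, k)
% MATPOW  Compute A^k with about log2(k) matrix multiplications
% using exponentiation by repeated squaring. Assumes A is square
% and k is a nonnegative integer.
    P = eye(size(A, 1));
    while k > 0
        if mod(k, 2) == 1
            P = P * A;    % this binary digit of k is set
        end
        A = A * A;        % successive squares: A, A^2, A^4, ...
        k = floor(k / 2);
    end
end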

Diagonalization allows us to compute \(\mathbf{A}^k\) faster still, because each interior \(\mathbf{X}^{-1}\,\mathbf{X}\) pair in the product of repeated \(\mathbf{X\,\Lambda\,X}^{-1}\) factors cancels.

\[\begin{split}\begin{array}{rcl} \mathbf{A}^2 &= \mathbf{X\,\Lambda\,X}^{-1}\: \mathbf{X\,\Lambda\,X}^{-1} &= \mathbf{X}\,\mathbf{\Lambda}^2\,\mathbf{X}^{-1} \\ \mathbf{A}^3 &= \mathbf{X}\,\mathbf{\Lambda}^2\,\mathbf{X}^{-1}\: \mathbf{X\,\Lambda\,X}^{-1} &= \mathbf{X}\,\mathbf{\Lambda}^3\,\mathbf{X}^{-1} \\ \end{array}\end{split}\]
\[\boxed{\mathbf{A}^k = \mathbf{X\,\Lambda}^k\,\mathbf{X}^{-1}}\]

Because the \(\bf{\Lambda}\) matrix is diagonal, only the individual \(\lambda\) values need be raised to the \(k\)th power (an element-wise exponent).

\[\begin{split}\mathbf{\Lambda}^k = \begin{bmatrix} \lambda_1^k & 0 & \cdots{} & 0 \\ 0 & \lambda_2^k & \cdots{} & 0 \\ \vdots{} & \vdots{} & \ddots{} & \vdots{} \\ 0 & 0 & \cdots{} & \lambda_n^k \end{bmatrix}\end{split}\]
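For a matrix that is diagonalizable but not symmetric, the general form with \(\mathbf{X}^{-1}\) applies. As a quick numerical check, reusing the hypothetical example matrix from above:

>> A = [4 1; 2 3];
>> [X, L] = eig(A);
>> norm(X*L.^5/X - A^5)    % near zero; L.^5 raises only the diagonal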

Example Power of A

In this example, the matrix is symmetric, so the inverse simplifies to a transpose.

>> S = gallery('moler', 4)
S =
     1    -1    -1    -1
    -1     2     0     0
    -1     0     3     1
    -1     0     1     4
>> [Q, L] = eig(S);
>> norm(((Q * L.^5 * Q') - S^5), 'fro')
ans =
   6.2428e-12