8.4. Properties of Eigenvalues and Eigenvectors

Here is a summary of basic properties of eigenvalues and eigenvectors.

  • Eigenvalues and eigenvectors are properties of the matrix alone.

  • Only square matrices have eigenvalues and eigenvectors.

  • A matrix is singular if, and only if, one or more of its eigenvalues are equal to zero. Each characteristic matrix \mathbf{C}_i = \mathbf{A} - \lambda_i\,\mathbf{I} has a shift along the diagonal such that it is singular. If some \lambda_i = 0, then \mathbf{C}_i = \mathbf{A}, so \mathbf{A} itself is singular; conversely, if \mathbf{A} is singular, then \lambda_i = 0 for some i.
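
    For example, a minimal check in MATLAB (the singular matrix here is an arbitrary rank-deficient example):

      >> A = [1 2; 2 4];   % second row is twice the first, so A is singular
      >> eig(A)            % returns 0 and 5; the zero eigenvalue confirms singularity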

  • An n{\times}n matrix has n eigenvalues, counting multiplicity, and up to n linearly independent eigenvectors. It has exactly n independent eigenvectors when the eigenvalues are distinct.

    They come in matched pairs, called eigenpairs. If an eigenvalue repeats, then the matrix may also have repeating eigenvectors. Such cases are called degenerate because the matrix does not have n linearly independent eigenvectors, and thus cannot be factored using diagonalization, which is described in Diagonalization. When the eigenvalues of a matrix are distinct, the eigenvectors form a set of linearly independent vectors, which is proven in Linearly Independent Eigenvectors. A degenerate case is sketched below.
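
    A sketch of the degenerate case (the matrix here is a Jordan block chosen for illustration):

      >> A = [2 1; 0 2];   % eigenvalue 2 repeats; only one independent eigenvector exists
      >> [X, L] = eig(A);  % the columns of X come back (numerically) parallel
      >> rank(X)           % returns 1, not 2, so A cannot be diagonalized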

  • Eigenvectors are invariant to scale.

    The product of an eigenvector and a scalar constant, c, is also an eigenvector. So technically a matrix has infinitely many eigenvectors, but it has at most n independent eigenvectors. It is the direction of the eigenvector that matters, not the magnitude. Because the eigenpair equation has the same eigenvector on both sides of the equality, eigenvectors are invariant to scale. [1]

    \mathbf{A}\,(c\,\bm{x}) = \lambda\,(c\,\bm{x})
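
    A quick numerical confirmation (the matrix and scalar are arbitrary choices):

      >> A = [4 1; 2 3];
      >> [X, L] = eig(A);
      >> x = X(:,1); lambda = L(1,1);
      >> c = 7;                         % any nonzero scalar works
      >> norm(A*(c*x) - lambda*(c*x))   % near machine epsilon: c*x is also an eigenvector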

  • They are often complex.

    Eigenvalues are often complex numbers, particularly for nonsymmetric matrices, which also means that the eigenvectors will have complex elements.
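
    For example, a rotation matrix (a standard example with no real eigenvectors):

      >> theta = pi/3;
      >> A = [cos(theta) -sin(theta); sin(theta) cos(theta)];
      >> eig(A)    % complex conjugate pair: cos(theta) +/- 1i*sin(theta)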

  • Symmetric matrices have real eigenvalues and eigenvectors.

  • If a matrix is invertible, then each eigenvalue of the matrix inverse is the reciprocal of an eigenvalue of the matrix (symmetry is not required). Left-multiplying the eigenpair equation \mathbf{A}\,\bm{x} = \lambda\,\bm{x} by \mathbf{A}^{-1} and dividing by \lambda gives \mathbf{A}^{-1}\,\bm{x} = \frac{1}{\lambda}\,\bm{x}, so the two matrices also share the same eigenvectors. If we use the notation \lambda_i(\mathbf{A}) to mean the ith eigenvalue of \mathbf{A}, then

    (8.4)\lambda_i(\mathbf{A}^{-1}) = \frac{1}{\lambda_i(\mathbf{A})}.
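
    A numerical check of equation (8.4) (the invertible matrix is an arbitrary example):

      >> A = [5 2; 2 3];
      >> sort(eig(inv(A)))   % matches ...
      >> sort(1 ./ eig(A))   % ... the reciprocals of the eigenvalues of A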

  • Symmetric matrices have orthogonal eigenvectors.

    A proof is given in Spectral Decomposition Theorem.
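
    A numerical illustration (the symmetric matrix is an arbitrary example):

      >> A = [4 1 2; 1 5 3; 2 3 6];   % symmetric
      >> [X, L] = eig(A);
      >> isreal(diag(L))              % true: the eigenvalues are real
      >> norm(X'*X - eye(3))          % near zero: the eigenvectors are orthonormal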

  • Eigenvalues are not unique to a matrix.

    Matrices related by a similarity transformation, \mathbf{B} = \mathbf{P}^{-1}\,\mathbf{A}\,\mathbf{P}, are said to be similar, and similar matrices have the same eigenvalues. The proof of existence and requirements for similar matrices are given in Similarity Transformations. Similar matrices are an important part of how MATLAB finds eigenvalues. Through a sequence of similarity transformations based on the QR factorization, a similar upper triangular matrix is found with the eigenvalues on the diagonal.
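
    A sketch of this property (the matrices here are arbitrary, with P chosen invertible):

      >> A = [4 1; 2 3];
      >> P = [1 2; 3 5];     % any invertible matrix
      >> B = P \ (A*P);      % B = inv(P)*A*P is similar to A
      >> sort(eig(B))        % same eigenvalues ...
      >> sort(eig(A))        % ... as A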

  • Two eigenvalue properties are corollary results of the Schur Triangularization Theorem, which states that any square matrix can be factored as \mathbf{A} = \mathbf{Q\,T\,Q}^H, where \mathbf{Q} and its Hermitian transpose are unitary and \mathbf{T} is upper triangular with the eigenvalues on the diagonal. The Schur decomposition is found from iterative orthogonal algorithms in the calculation of the eigenvalues and eigenvectors (The Schur Decomposition). The columns of \mathbf{Q} are called the Schur vectors, and by the spectral decomposition (Spectral Decomposition Theorem) they are the eigenvectors of \mathbf{A} when \mathbf{A} is Hermitian symmetric (\mathbf{A} = \mathbf{A}^H). Because \mathbf{Q} and \mathbf{Q}^H are unitary, the trace and determinant of \mathbf{A} are the same as those of the upper triangular \mathbf{T}. Thus we have the following two eigenvalue properties.

  • The trace of a matrix is the sum of its eigenvalues: \operatorname{trace}(\mathbf{A}) = \sum_{i=1}^{n} \lambda_i.

  • The determinant of a matrix is the product of its eigenvalues: \det(\mathbf{A}) = \prod_{i=1}^{n} \lambda_i.
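
    A numerical check of these corollaries, along with the Schur factoring itself (the test matrix is random, so the exact values vary from run to run):

      >> A = randn(4);                  % arbitrary square test matrix
      >> lambda = eig(A);
      >> abs(trace(A) - sum(lambda))    % near zero: trace equals the sum
      >> abs(det(A) - prod(lambda))     % near zero: determinant equals the product
      >> [Q, T] = schur(A, 'complex');  % A = Q*T*Q' with T upper triangular
      >> norm(Q*T*Q' - A)               % near zero: the factoring holds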

  • The eig function:

    • MATLAB returns the eigenvectors as unit vectors (each column is normalized to length one). This provides a convenience in some applications.

    • When called with a single output, or none, eig returns a vector of the eigenvalues.

    • To save both the eigenvectors (X) and eigenvalues (L), use:

      >> [X, L] = eig(A)
      
    • The columns of X are the eigenvectors.

    • The eigenvalues are on the diagonal of L. To get a vector of the eigenvalues, use l = diag(L).
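
    • A short session tying these together (the symmetric matrix is an arbitrary example):

      >> A = [2 1; 1 2];
      >> [X, L] = eig(A);
      >> l = diag(L)       % vector of eigenvalues: 1 and 3
      >> norm(A*X - X*L)   % near zero: each column of X pairs with a diagonal entry of L
      >> vecnorm(X)        % each eigenvector has unit length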

Footnote:

[1] In the course of computing eigenvectors that are complex, two algorithms may yield eigenvectors that appear completely different but are actually complex scalar multiples of each other.