8.4. Properties of Eigenvalues and Eigenvectors

Here is a summary of the basic properties of eigenvalues and eigenvectors.

  • Eigenvalues and eigenvectors are properties of the matrix alone.

  • Only square matrices have eigenvalues and eigenvectors.

  • A matrix is singular if, and only if, one or more of its eigenvalues are equal to zero. Each eigenvalue \(\lambda_i\) shifts the diagonal of the characteristic matrix \(\mathbf{C}_i = \mathbf{A} - \lambda_i\,\mathbf{I}\) such that it is singular. So if \(\bf{A}\) is already singular, then \(\lambda_i = 0\) is an eigenvalue. For diagonalizable matrices, the number of nonzero eigenvalues is the rank of the matrix. MATLAB’s rank function uses the singular values from the SVD (Singular Value Decomposition (SVD)) instead of the eigenvalues because the singular values give a more reliable numerical rank and are defined for rectangular matrices as well.
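
    For example, here is a minimal check in MATLAB (the rank-1 matrix is arbitrary, chosen only for illustration):

      >> A = [1 2; 2 4];   % rank-1, so singular
      >> eig(A)            % one eigenvalue is 0 (the other is 5)
      >> rank(A)           % returns 1, computed from the singular values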

  • An \(n{\times}n\) matrix has \(n\) eigenvalues, counted with multiplicity, and has \(n\) linearly independent eigenvectors when the eigenvalues are distinct.

    Matrices with a repeated eigenvalue that lack a full set of linearly independent eigenvectors are called degenerate (or defective), and thus can not be factored using diagonalization (Diagonalization). When the eigenvalues of a matrix are distinct, the eigenvectors form a set of linearly independent vectors, which is proven in Linearly Independent Eigenvectors Theorem.
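
    For example, a Jordan block has a repeated eigenvalue but only one independent eigenvector (a minimal sketch; the example matrix is arbitrary):

      >> A = [2 1; 0 2];   % eigenvalue 2 repeated, one independent eigenvector
      >> [X, L] = eig(A)   % the columns of X are (numerically) parallel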

  • Eigenvectors are invariant to scale.

    The product of an eigenvector and a scalar constant, \(c\), is also an eigenvector. It is the direction of the eigenvector that matters, not the magnitude. The eigenvalue equation below has the same eigenvector on both sides of the equality, so any nonzero scalar multiple of an eigenvector is also an eigenvector. Two sets of complex eigenvectors may appear completely different, but both can be correct if they are complex scalar multiples of each other. A numerical check follows the equation.

    \[\mathbf{A}\,(c\,\bm{x}) = \lambda\,(c\,\bm{x})\]
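
    A quick numerical check (the symmetric matrix and the scale factor here are arbitrary):

      >> A = [2 1; 1 2];
      >> [X, L] = eig(A);
      >> x = 5*X(:,1);          % scale an eigenvector of A
      >> norm(A*x - L(1,1)*x)   % zero to roundoff, so 5x is still an eigenvector
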
  • Eigenvalues and eigenvectors are often complex.

    Even when a matrix is real, its eigenvalues may be complex numbers, which means that the eigenvectors will also be complex. For a real matrix, complex eigenvalues and eigenvectors occur in conjugate pairs.
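
    A familiar example is a rotation matrix, whose eigenvalues are complex for any rotation angle that is not a multiple of 180 degrees:

      >> A = [0 -1; 1 0];   % rotation by 90 degrees
      >> eig(A)             % the complex conjugate pair 0 + 1i and 0 - 1i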

  • Symmetric matrices have real eigenvalues and eigenvectors.

  • If a matrix is invertible, then each eigenvalue of the matrix inverse is the reciprocal of an eigenvalue of the matrix. This follows from multiplying both sides of \(\mathbf{A}\,\bm{x} = \lambda\,\bm{x}\) by \(\lambda^{-1}\,\mathbf{A}^{-1}\), which gives \(\mathbf{A}^{-1}\bm{x} = \lambda^{-1}\,\bm{x}\). This result is important for calculating a matrix inverse from the SVD. Using the notation \(\lambda_i(\mathbf{A})\) to mean the \(i\)th eigenvalue of \(\mathbf{A}\), then

    (8.4)\[\lambda_i(\mathbf{A}^{-1}) = \frac{1}{\lambda_i(\mathbf{A})}.\]
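
    A quick check of equation (8.4) (any invertible matrix works; a symmetric one keeps the arithmetic real):

      >> A = [2 1; 1 2];                % invertible, eigenvalues 1 and 3
      >> sort(eig(inv(A)))              % returns 1/3 and 1
      >> 1 ./ sort(eig(A), 'descend')   % matches the line above
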
  • Symmetric matrices have orthogonal eigenvectors.
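
    For example, with an arbitrary symmetric matrix:

      >> A = [2 1; 1 2];
      >> [X, ~] = eig(A);
      >> X'*X   % the identity matrix to roundoff: the eigenvectors are orthonormal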

  • Eigenvalues are not unique to a matrix.

    If matrix \(\bf{B}\) relates to matrix \(\bf{A}\) by \(\mathbf{B} = \mathbf{M\,A\,M}^{-1}\), where \(\bf{M}\) is any invertible matrix, then \(\bf{A}\) and \(\bf{B}\) are similar and have the same eigenvalues. The proof of existence and requirements for similar matrices is given in Similarity Transformations. Similar matrices are an important part of how eigenvalues are calculated: a sequence of similarity transformations produces a similar upper triangular matrix with the eigenvalues on its diagonal.
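
    A quick numerical check (the matrices here are arbitrary, but \(\bf{M}\) must be invertible):

      >> A = [4 1; 2 3];
      >> M = [1 2; 0 1];               % invertible (determinant 1)
      >> B = M*A/M;                    % similarity transform, B = M*A*inv(M)
      >> sort(eig(B)) - sort(eig(A))   % zero to roundoff: same eigenvalues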

  • Two eigenvalue properties are corollary results of the Schur Triangularization Theorem in (Schur Decomposition), which states that any square matrix can be factored as \(\mathbf{A} = \mathbf{Q\, T\, Q}^H\), where \(\bf{Q}\) is unitary and \(\bf{T}\) is upper triangular with the eigenvalues on the diagonal. The Schur decomposition is a preliminary result in calculating the eigenvalues and eigenvectors. The columns of \(\bf{Q}\) are called the Schur vectors and, by the spectral decomposition theorem (Spectral Decomposition Theorem), are the eigenvectors of \(\bf{A}\) when \(\bf{A}\) is Hermitian symmetric (\(\mathbf{A} = \mathbf{A}^H\)). Corollaries to the Schur Triangularization Theorem show that the trace and determinant of \(\bf{A}\) are the same as those of the upper triangular \(\bf{T}\). Thus, we have the following two eigenvalue properties, which are checked numerically after the list.

    • The trace of a matrix (sum of the diagonal elements) is the sum of the eigenvalues. A proof is given in Sum of Eigenvalues.

    • The determinant of a matrix is the product of the eigenvalues. A proof is given in Product of Eigenvalues.
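
    A quick numerical check of both properties (using a random matrix):

      >> A = randn(4);       % arbitrary 4-by-4 matrix
      >> l = eig(A);
      >> trace(A) - sum(l)   % zero to roundoff (any imaginary part is negligible)
      >> det(A) - prod(l)    % zero to roundoff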

  • The eig function:

    • MATLAB returns the eigenvectors as unit vectors.

    • When called with no output variables or a single output variable, eig returns the eigenvalues as a column vector.

    • When called with two output variables, eig returns both the eigenvectors (X) and the eigenvalues (L) as matrices.

      >> [X, L] = eig(A)
      

      We use L for the capital Greek letter Lambda, \(\Lambda\).

    • The columns of X are the eigenvectors.

    • The eigenvalues are on the diagonal of L. To get a vector of the eigenvalues, use l = diag(L).
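
      Putting the pieces together (the symmetric matrix here is arbitrary):

      >> A = [2 1; 1 2];
      >> [X, L] = eig(A);
      >> l = diag(L)   % eigenvalues as a column vector: 1 and 3
      >> A*X - X*L     % the zero matrix to roundoff, confirming A*X = X*L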