Diagonalization: Performing a transformation from the perspective of the eigenbasis.
We can exploit the efficiency of diagonal matrices for taking large powers if we convert a transformation matrix $M$ into a diagonal one by conjugating it with the change-of-basis matrix $C$ and its inverse: $C^{-1} M C$.
“First translate from the language of the eigenbasis to our basis (apply $C$), then apply $M$, then translate back ($C^{-1}$). The resulting matrix $C^{-1} M C$ will represent the same transformation as $M$, just in the language of the eigenbasis.”
If $C$ consists of the eigenvectors of $M$ as its columns, we are guaranteed that $D = C^{-1} M C$ is a diagonal matrix with the eigenvalues of $M$ down the diagonal. → This is because we are then working in a coordinate system where the basis vectors get scaled by the transformation!
Then we can compute $M^k$ like so: $M^k = C D^k C^{-1}$, where $D^k$ just raises each diagonal entry to the $k$-th power.
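A minimal numpy sketch of this trick, using a hypothetical diagonalizable 2×2 matrix (not from these notes):

```python
import numpy as np

# Hypothetical diagonalizable matrix for illustration.
M = np.array([[3.0, 1.0],
              [0.0, 2.0]])

# Columns of C are the eigenvectors of M; D holds the eigenvalues.
eigenvalues, C = np.linalg.eig(M)
D = np.diag(eigenvalues)

# D = C^{-1} M C: M expressed in the language of the eigenbasis.
assert np.allclose(np.linalg.inv(C) @ M @ C, D)

# Large powers are cheap: M^k = C D^k C^{-1}, where D^k just raises
# each diagonal entry to the k-th power.
k = 10
M_power = C @ np.diag(eigenvalues ** k) @ np.linalg.inv(C)
assert np.allclose(M_power, np.linalg.matrix_power(M, k))
```

Raising the diagonal entries elementwise costs $O(n)$ per power step, versus a full matrix multiply for repeated squaring of $M$ itself.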
Not every matrix $M$ has enough eigenvectors to form a proper eigenbasis: the eigenvectors need to span a space of the same dimension as the original space (rank $n$), so this trick is not always possible, e.g. for a shear.
Symmetric matrices are always diagonalizable.
Eigendecomposition
For a square matrix $A \in \mathbb{R}^{n \times n}$: if $A$ has $n$ linearly independent eigenvectors, it can be decomposed as: $A = Q \Lambda Q^{-1}$
where $q_1, \dots, q_n$ are the linearly independent eigenvectors of $A$, and $\lambda_1, \dots, \lambda_n$ are the corresponding eigenvalues. $Q = [q_1 \; \cdots \; q_n]$ is the matrix of eigenvectors, and $\Lambda = \operatorname{diag}(\lambda_1, \dots, \lambda_n)$ is a diagonal matrix of eigenvalues.
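A quick numerical check of the decomposition, with a hypothetical example matrix (not from these notes):

```python
import numpy as np

# Hypothetical example matrix.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigenvalues, Q = np.linalg.eig(A)  # columns of Q are the eigenvectors q_i
Lam = np.diag(eigenvalues)

# Each column satisfies the eigenvector equation A q_i = lambda_i q_i ...
for i in range(A.shape[0]):
    assert np.allclose(A @ Q[:, i], eigenvalues[i] * Q[:, i])

# ... and packing all columns together gives A = Q Lam Q^{-1}.
assert np.allclose(Q @ Lam @ np.linalg.inv(Q), A)
```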
Eigendecomposition is a type of matrix factorization
Geometric Interpretation
The eigendecomposition $A = Q \Lambda Q^{-1}$ reveals how $A$ transforms space:
$Q^{-1}$ changes to the eigenvector basis (“rotate to the eigenbasis”)
$\Lambda$ scales along each eigenvector direction by $\lambda_i$
$Q$ transforms back to the original basis (“rotate back”)
Not all square matrices have an eigendecomposition!
A matrix must have $n$ linearly independent eigenvectors (a complete eigenbasis; eigenvector matrix of rank $n$) to be diagonalizable. Matrices that don’t satisfy this are called defective.
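The shear mentioned above is the classic defective example, which can be seen numerically:

```python
import numpy as np

# A shear has the single eigenvalue 1, but its eigenspace is only
# one-dimensional, so there is no complete eigenbasis.
shear = np.array([[1.0, 1.0],
                  [0.0, 1.0]])

eigenvalues, vectors = np.linalg.eig(shear)
assert np.allclose(eigenvalues, [1.0, 1.0])

# The two eigenvector columns numpy returns are (numerically) parallel:
# the eigenvector matrix has rank 1, so it is singular and cannot be
# inverted to form Q Lam Q^{-1}.
assert np.linalg.matrix_rank(vectors) == 1
```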
Spectral theorem
Every symmetric matrix is diagonalizable.
Let $A \in \mathbb{R}^{n \times n}$ be a symmetric matrix. Then, all eigenvalues of $A$ are real, and there exists an orthonormal basis of $\mathbb{R}^n$ consisting of eigenvectors of $A$ (aka an eigenbasis).
Then: $A = Q \Lambda Q^\top$, where $Q$ is an orthogonal matrix ($Q^\top Q = Q Q^\top = I$) whose columns are the eigenvectors of $A$, and $\Lambda$ is the diagonal matrix of the corresponding eigenvalues.
→ For symmetric matrices, this is a pure scaling along the orthogonal directions of the eigenvectors, or principal axes: the natural directions along which $A$ acts purely by scaling. Each eigenvector direction is scaled by its eigenvalue without any rotation or shearing.
→ Eigendecomposition and SVD coincide for symmetric positive semidefinite matrices; for a general symmetric matrix, the singular values are the absolute values of the eigenvalues, so the two coincide only up to signs.
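These claims can be checked numerically with a hypothetical symmetric (and here positive definite) matrix:

```python
import numpy as np

# Hypothetical symmetric matrix for illustration.
S = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh is specialized for symmetric matrices: eigenvalues come out real
# and the eigenvector matrix Q is orthogonal.
eigenvalues, Q = np.linalg.eigh(S)
assert np.allclose(Q.T @ Q, np.eye(2))                  # Q^T Q = I
assert np.allclose(Q @ np.diag(eigenvalues) @ Q.T, S)   # A = Q Lam Q^T

# S is positive definite here (eigenvalues 1 and 3), so its singular
# values equal its eigenvalues and the SVD matches the eigendecomposition.
singular_values = np.linalg.svd(S, compute_uv=False)
assert np.allclose(sorted(singular_values), sorted(eigenvalues))
```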
I think some of the nuances here are incorrect.