A diagonal matrix has non-zero entries only along its diagonal.

$$D = \begin{pmatrix} d_1 & 0 & \cdots & 0 \\ 0 & d_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & d_n \end{pmatrix}$$

$d_1, d_2, \dots, d_n$ are the diagonal elements of the matrix.
We write $D = \operatorname{diag}(d_1, d_2, \dots, d_n)$ or simply $D = \operatorname{diag}(d_i)$.

The inverse of a diagonal matrix is easy (provided all $d_i \neq 0$):

$$D^{-1} = \operatorname{diag}\left(\frac{1}{d_1}, \dots, \frac{1}{d_n}\right)$$

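A minimal numpy sketch (the diagonal values are arbitrary) checking that inverting a diagonal matrix just inverts each diagonal entry:
```python
import numpy as np

# Example diagonal matrix; all entries must be non-zero for the inverse to exist
D = np.diag([2.0, 4.0, 5.0])

# Entrywise inversion: diag(1/d_1, ..., 1/d_n)
D_inv = np.diag(1.0 / np.diag(D))

# Matches the general-purpose matrix inverse
assert np.allclose(D_inv, np.linalg.inv(D))
```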

Diagonal matrices have nice computational properties when taking large powers, since $D^k = \operatorname{diag}(d_1^k, \dots, d_n^k)$: each diagonal entry is raised to the $k$-th power independently, with no matrix multiplications.
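
A small sketch (sizes and values chosen arbitrarily) of why large powers are cheap, comparing the entrywise power against repeated matrix multiplication:
```python
import numpy as np

d = np.array([2.0, 3.0, 0.5])   # arbitrary diagonal entries
D = np.diag(d)
k = 10

# Cheap: raise each diagonal entry to the k-th power
D_pow_fast = np.diag(d ** k)

# Reference: repeated matrix multiplication
D_pow_slow = np.linalg.matrix_power(D, k)

assert np.allclose(D_pow_fast, D_pow_slow)
```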

In a diagonal matrix, the standard basis vectors are eigenvectors, and the diagonal entries are the corresponding eigenvalues.

Diagonal matrices only scale what they are being multiplied with, coordinate by coordinate, since the basis vectors are eigenvectors. Only when all diagonal entries are equal, i.e. $D = \lambda I$, is every vector an eigenvector.

The matrix $2I = \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix}$ scales every vector by 2. Every vector is an eigenvector with $\lambda = 2$.
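
A quick check of this, with an arbitrary test vector:
```python
import numpy as np

A = 2 * np.eye(2)           # the matrix 2I
v = np.array([3.0, -1.0])   # any vector works

# A v = 2 v, so v is an eigenvector with eigenvalue 2
assert np.allclose(A @ v, 2 * v)
```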


Eigendecomposition / Diagonalization

Performing a transformation from the perspective of the eigenbasis (which only exists if the eigenvectors span the space) can be much easier than in the original basis.
We can exploit the efficiency of diagonal matrices for taking large powers if we convert a transformation matrix $A$ into a diagonal one by multiplying it with the change-of-basis matrix $P$ and its inverse:

$$D = P^{-1} A P$$
“First translate from the language of the eigenbasis to our basis, then apply $A$, then translate back. The resulting matrix will represent the same transformation as $A$, just in the language of the eigenbasis.”

If $P$ consists of the eigenvectors of $A$ (as its columns), we are guaranteed to get a diagonal matrix with the eigenvalues down the diagonal. → This is because we are then working in a coordinate system where the basis vectors get scaled by the transformation!
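
A sketch with an arbitrary (diagonalizable) example matrix, confirming that $P^{-1}AP$ comes out diagonal with the eigenvalues on the diagonal:
```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])          # arbitrary diagonalizable matrix

eigenvalues, P = np.linalg.eig(A)   # columns of P are eigenvectors of A

# Change of basis into the eigenbasis: the result is diagonal
D = np.linalg.inv(P) @ A @ P
assert np.allclose(D, np.diag(eigenvalues))
```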

Then we can compute $A^n$ like so:

$$A^n = P \, D^n \, P^{-1}$$
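
A sketch of the power trick itself (same example matrix as above, exponent chosen arbitrarily):
```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])
n = 8

eigenvalues, P = np.linalg.eig(A)
D_pow = np.diag(eigenvalues ** n)   # D^n: just entrywise powers

# A^n = P D^n P^{-1}
A_pow = P @ D_pow @ np.linalg.inv(P)
assert np.allclose(A_pow, np.linalg.matrix_power(A, n))
```
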
Not every $A$ has enough independent eigenvectors to form a proper eigenbasis: the eigenvectors need to span a space of the same dimension as the original one (an $n \times n$ matrix needs $n$ linearly independent eigenvectors), so this trick is not always possible, e.g. for a shear.
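
For example, the standard 2D shear has eigenvalue 1 with algebraic multiplicity 2 but only a one-dimensional eigenspace, so no eigenbasis exists; a numpy sketch:
```python
import numpy as np

S = np.array([[1.0, 1.0],
              [0.0, 1.0]])    # shear: eigenvalue 1, algebraic multiplicity 2

eigenvalues, P = np.linalg.eig(S)

# The two returned eigenvector columns are (numerically) linearly dependent,
# so they cannot form a basis and S is not diagonalizable
assert np.linalg.matrix_rank(P) == 1
```
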
Symmetric matrices are always diagonalizable.
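
By contrast, a symmetric matrix always comes with an orthonormal eigenbasis (the spectral theorem); a sketch using numpy's `eigh`, which exploits symmetry and returns an orthogonal $P$ with $P^{-1} = P^T$:
```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])               # symmetric, hence diagonalizable

eigenvalues, P = np.linalg.eigh(A)       # P is orthogonal

# Eigendecomposition with a transpose instead of an explicit inverse
assert np.allclose(P @ np.diag(eigenvalues) @ P.T, A)
```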
