Eigenvectors are vectors whose span is unchanged by a linear transformation.

Eigenvectors only get stretched/squished (scaled) or are left unchanged.
The eigenvalue is the value by which the vector is scaled.
In a 3D rotation, you can think of the eigenvector as the axis of rotation.
For a true rotation however, the eigenvalue needs to be $1$, as rotations don’t scale vectors (we call those transformation matrices orthogonal; unitary in the complex case).

Applying the transformation matrix $A$ to the eigenvector $\vec{v}$ is equivalent to simply scaling it by its eigenvalue $\lambda$:

$$A\vec{v} = \lambda\vec{v}$$

We want to figure out the eigenvector $\vec{v}$ and the eigenvalue $\lambda$:
For that we make both sides look like matrix-vector multiplication, by multiplying $\lambda$ with the identity matrix:

$$A\vec{v} = (\lambda I)\vec{v}$$

We now want the nonzero solution for $\vec{v}$:

$$(A - \lambda I)\vec{v} = \vec{0}$$

The product of a nonzero matrix with a nonzero vector can only be the zero vector if the matrix’s determinant is zero, so we are looking for a $\lambda$ that satisfies:

$$\det(A - \lambda I) = 0$$

Meaning we tweak $\lambda$ in such a way that the resulting transformation squishes space into a lower dimension.

If $\det(A - \lambda I) = 0$, then the transformation changes the space in a way that it collapses to a lower dimension (for the 3D case it may be a plane, line or point: anything with 0 volume).
If the determinant is zero, the matrix is not invertible.
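
To make this concrete, here’s a small worked example of my own (not from the video), using a hypothetical matrix $A = \begin{pmatrix} 3 & 1 \\ 0 & 2 \end{pmatrix}$:

$$\det(A - \lambda I) = \det\begin{pmatrix} 3-\lambda & 1 \\ 0 & 2-\lambda \end{pmatrix} = (3-\lambda)(2-\lambda) = 0 \quad\Rightarrow\quad \lambda_1 = 3,\ \lambda_2 = 2$$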


We can now find the eigenvector $\vec{v}$, which stays on its span, scaled by $\lambda$, by solving the equation $(A - \lambda I)\vec{v} = \vec{0}$.
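
Continuing my example from above: plugging $\lambda_2 = 2$ into $(A - \lambda I)\vec{v} = \vec{0}$ gives $\begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix}\vec{v} = \vec{0}$, so $\vec{v} = \begin{pmatrix} 1 \\ -1 \end{pmatrix}$ (or any scaled version of it). A quick numpy sketch to double-check (my addition, not from the notes):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

# Columns of `vecs` are the eigenvectors, in the same order as `vals`.
vals, vecs = np.linalg.eig(A)

for lam, v in zip(vals, vecs.T):
    # A v should equal lambda * v: the transformation only scales the eigenvector.
    assert np.allclose(A @ v, lam * v)
    print(f"lambda = {lam}, v = {v}")
```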

Not every transformation has a real eigenvector.

A rotation, e.g. by 90°:

$$\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$$

only has the imaginary eigenvalues $i$ and $-i$:
The rotation happens within the complex plane, around the origin.
So multiplying with $i$ usually means some kind of rotation. This is also the case within the complex plane in general: when multiplying complex numbers, the angles get added together.
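
numpy confirms this (my sketch, not from the notes): asking for the eigen-decomposition of the 90° rotation matrix returns complex values.

```python
import numpy as np

# 90-degree rotation: no real eigenvectors, so numpy falls back to complex results.
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])

vals, vecs = np.linalg.eig(R)
print(vals)  # [0.+1.j 0.-1.j] -> the eigenvalues i and -i
```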

A shear like $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ only has a single eigenvalue, $\lambda = 1$.

The eigenspace is 1-dimensional … a single eigenline (the x-axis).
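
A quick numpy check (my addition): the shear has the eigenvalue $1$ twice, but both returned eigenvector columns lie (numerically) on the x-axis, so there is only one eigendirection.

```python
import numpy as np

S = np.array([[1.0, 1.0],
              [0.0, 1.0]])  # shear

vals, vecs = np.linalg.eig(S)
print(vals)  # [1. 1.] -> a single, repeated eigenvalue
print(vecs)  # both columns point along [1, 0]: a single eigenline
```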

In a diagonal matrix, all basis vectors are eigenvectors, and the diagonal entries are their eigenvalues.

Diagonal matrices only scale what they are being multiplied with along each axis, since the basis vectors are eigenvectors. Only for a scalar matrix (all diagonal entries equal) is every vector an eigenvector.

The matrix $\begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix} = 2I$ scales every vector by 2. Every vector is an eigenvector with $\lambda = 2$.

Diagonal matrices have nice computational properties when taking large powers: $D^n$ just raises each diagonal entry to the $n$-th power.
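
A sketch of why that is (my addition): the $n$-th power of a diagonal matrix only needs the $n$-th power of each diagonal entry, instead of repeated full matrix multiplications.

```python
import numpy as np

D = np.diag([3.0, 2.0])

# Generic way: repeated matrix multiplication.
print(np.linalg.matrix_power(D, 5))

# Diagonal shortcut: just raise the diagonal entries to the 5th power.
print(np.diag(np.diag(D) ** 5))  # same result: diag(243, 32)
```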


Diagonalization: Performing a transformation from the perspective of the eigenbasis.

We can exploit the efficiency of diagonal matrices for taking large powers if we convert a transformation matrix $A$ into a diagonal one $D$ by multiplying it with the change-of-basis matrix $E$ (its columns are the eigenvectors of $A$) and its inverse:

$$D = E^{-1} A E$$

“First translate from the language of the eigenbasis to our basis, then apply $A$, then translate back. The resulting matrix $D$ will represent the same transformation as $A$, just in the language of the eigenbasis.”

If $E$ consists of the eigenvectors of $A$, we are guaranteed to get a diagonal matrix $D$ with the eigenvalues down the diagonal. → This is because we are then working in a coordinate system where the basis vectors just get scaled by the transformation!

Then we can compute $A^n$ like so:

$$A^n = E \, D^n \, E^{-1}$$
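
A minimal numpy sketch of the whole trick (my addition, reusing my example matrix from above):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])
n = 10

vals, E = np.linalg.eig(A)        # columns of E are the eigenvectors of A
D_n = np.diag(vals ** n)          # D^n is cheap: just power the eigenvalues

A_n = E @ D_n @ np.linalg.inv(E)  # to eigenbasis, scale, and back
assert np.allclose(A_n, np.linalg.matrix_power(A, n))
print(A_n)
```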

Not every $A$ has enough eigenvectors to form a proper eigenbasis — the eigenvectors need to span the whole space, i.e. $E$ needs full rank $n$ — so this trick is not always possible, like for a shear.
Symmetric matrices are always diagonalizable.


Eigenvector puzzle

Puzzle: compute $A^n$ for $A = \begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix}$.
3b1b’s solution: Eigenvalue puzzle solution.pdf

Highlights from the solution / my take:

The terms of the powers of $A$ form the Fibonacci sequence:

$$A^n = \begin{pmatrix} F_{n-1} & F_n \\ F_n & F_{n+1} \end{pmatrix}$$

The eigenvalues of $A$ were computed by:

$$\det(A - \lambda I) = \lambda^2 - \lambda - 1 = 0 \quad\Rightarrow\quad \lambda_{1,2} = \frac{1 \pm \sqrt{5}}{2}$$

$\lambda_1 = \frac{1 + \sqrt{5}}{2} \approx 1.618$ is the golden ratio $\varphi$, and $\lambda_2 = \frac{1 - \sqrt{5}}{2} \approx -0.618$ is $-\frac{1}{\varphi}$.

(after calculating the eigenvectors …)

Converting $A$ to the eigenbasis:

$$D = E^{-1} A E = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}$$

Now we can compute:

$$A^n = E \begin{pmatrix} \lambda_1^n & 0 \\ 0 & \lambda_2^n \end{pmatrix} E^{-1}$$

…instead of tedious matrix multiplication.

To translate back into the original coordinate system:

Okay I’m not typing this shit, looks sick tho (look at the solution pdf for the equation).
It’s a really interesting / funny way of computing Fibonacci.
(And I guess probably the most efficient? You don’t need any recursion — the diagonalization boils down to a closed-form expression, Binet’s formula.)
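
The same pipeline in numpy (my addition — note the floating-point version only stays exact up to moderate $n$, unlike exact integer matrix exponentiation):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 1.0]])

vals, E = np.linalg.eig(A)  # vals ~ [-0.618, 1.618]: -1/phi and phi

def fib(n: int) -> int:
    # A^n = E D^n E^-1; its top-right entry is the n-th Fibonacci number.
    A_n = E @ np.diag(vals ** n) @ np.linalg.inv(E)
    return round(A_n[0, 1])

print([fib(n) for n in range(1, 11)])  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```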

References

vector

3b1b: Eigenvectors and eigenvalues | Chapter 14, Essence of linear algebra

https://www.khanacademy.org/math/linear-algebra/alternate-bases/eigen-everything/v/linear-algebra-introduction-to-eigenvalues-and-eigenvectors