Eigenvectors are a really interesting concept with a wide variety of applications. I originally learnt about them while studying Physics; then a few years ago they came up again in the context of Principal Component Analysis (PCA), a Data Science technique.
In my experience, it's easy to miss the fundamental concept behind eigenvectors because they are written about so formally. (You might have found the same if you clicked on either of the two links above 👆.) And so here I wanted to capture my own limited mental model, focusing on an intuition for what's happening rather than anything formal.
What is a matrix?
Here is a (square, two-dimensional) matrix:
A matrix can be thought of as a transformation on vectors in space. That's because, given a vector:
It can be transformed by applying some matrix to it:
The new vector is the result of taking the dot product of the matrix and the original vector. And here's what the transformation process looks like:
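If you'd like to try this yourself, here's a minimal sketch in numpy. The matrix and vector below are hypothetical stand-ins (the article's own example isn't reproduced here), but the transformation step is exactly the dot product described above:

```python
import numpy as np

# A hypothetical 2x2 matrix, standing in for the one pictured above
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# An example vector to transform
v = np.array([1.0, 0.0])

# Applying the matrix to the vector: the dot product A · v
transformed = A @ v  # equivalent to np.dot(A, v)
print(transformed)   # a new vector in the same space
```

The `@` operator is numpy's matrix-multiplication operator; for a 2×2 matrix and a 2-vector it produces the transformed 2-vector.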
What is an eigenvector?
To start answering that, first of all let's imagine applying this matrix to every possible vector in a space. Here's one example of what that might look like:
Looking at this, hopefully it's noticeable that in some directions the vectors are stretched (maybe even inverted), but not skewed. I've marked those directions with blue arrows:
These blue arrows are the directions of the eigenvectors of the matrix.
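Those special directions can also be found numerically. As a sketch (again using a hypothetical matrix, not the article's), numpy's `np.linalg.eig` returns the eigenvectors, and we can check the defining property: applying the matrix to an eigenvector only stretches it by a factor (the eigenvalue), without skewing it into a different direction:

```python
import numpy as np

# The same hypothetical 2x2 matrix as before
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns eigenvalues and a matrix whose
# columns are the corresponding eigenvectors
eigenvalues, eigenvectors = np.linalg.eig(A)

for value, vector in zip(eigenvalues, eigenvectors.T):
    # A · v points in the same direction as v,
    # just scaled by the eigenvalue
    assert np.allclose(A @ vector, value * vector)
```

Each blue arrow in the picture corresponds to one of those columns: a direction the matrix stretches but does not rotate.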
A matrix can be thought of as similar to a lens or a prism; looking through it distorts space. If you imagine letting a lens rotate in front of you, then there will be certain directions in which you see the world as stretched, but not skewed. Those directions are a property of the lens, and we call them eigenvectors.
I purposefully haven't mentioned how to calculate an eigenvector by hand, or why it would be useful to do so. But I hope that this intuitive idea of a lens might help keep a picture in your head if you're ever working with eigenvectors in the future.