Stationary distribution of a random walk.

For long random walks on a Markov chain, the probability of ending at a particular node converges to a fixed value, regardless of where you started.
This theorem holds for Markov chains whose underlying graphs are strongly connected.

Transition probability matrix $A$ (here the transpose of the weighted adjacency matrix)

Each column $j$ of $A$ gives the probabilities of transitioning from state $j$ to each other state in one step of the random walk.
Let $e_v$ be the basis vector meaning “the random walk is at vertex $v$” → $Ae_v$ gives a vector whose $w$-th coordinate is the probability that the random walk would be at vertex $w$ after one more step in the random walk.
→ If a random walk is in state $v$ with probability $x_v$, then the $w$-th entry of $Ax$ is the probability that after one more step in the random walk you get to vertex $w$.
→ The stationary distribution is a probability distribution $x$ such that $Ax = x$; in other words, $x$ is an eigenvector of $A$ with eigenvalue $1$.
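The definition above can be checked numerically. The sketch below uses a small hypothetical 3-state chain (the matrix is my own example, not from the notes): iterating $x \mapsto Ax$ from any starting vertex drives $x$ toward a vector satisfying $Ax = x$.

```python
# Hypothetical 3-state chain; A[i][j] = probability of moving from
# state j to state i, so each COLUMN of A sums to 1.
A = [
    [0.5, 0.2, 0.3],
    [0.3, 0.6, 0.1],
    [0.2, 0.2, 0.6],
]

def step(A, x):
    """One step of the walk: the matrix-vector product A @ x."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

x = [1.0, 0.0, 0.0]  # start the walk at vertex 0
for _ in range(1000):
    x = step(A, x)

# x is now (approximately) stationary: A x == x up to float error.
Ax = step(A, x)
print(max(abs(Ax[i] - x[i]) for i in range(3)))
```

Since every entry of this $A$ is positive, the chain is strongly connected and the iteration converges quickly; for a general chain one would instead compute the eigenvector of $A$ for eigenvalue $1$ directly.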

Convergence to the stationary distribution on a Markov chain

Let $G$ be a strongly connected graph with associated edge probabilities forming a Markov chain with transition matrix $A$. For a probability vector $x_0$, define $x_{t+1} = Ax_t$ for all $t \ge 0$, and let $v_t$ be the long-term average $v_t = \frac{1}{t} \sum_{s=1}^{t} x_s$. Then:
→ There is a unique probability vector $\pi$ with $A\pi = \pi$.
→ For all $x_0$, the limit $\lim_{t \to \infty} v_t = \pi$.
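The second claim can be illustrated numerically: computing the long-term average $v_t$ from two different starting vectors gives (nearly) the same answer. The chain below is a hypothetical example, not taken from the notes.

```python
# Hypothetical chain; columns of A sum to 1.
A = [[0.5, 0.2, 0.3],
     [0.3, 0.6, 0.1],
     [0.2, 0.2, 0.6]]

def step(A, x):
    """One step of the walk: A @ x."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def cesaro_average(A, x0, t):
    """v_t = (1/t) * (x_1 + ... + x_t) where x_{s+1} = A x_s."""
    x, total = x0, [0.0] * len(x0)
    for _ in range(t):
        x = step(A, x)
        total = [a + b for a, b in zip(total, x)]
    return [a / t for a in total]

# Two different starting distributions converge to the same average.
v_a = cesaro_average(A, [1.0, 0.0, 0.0], 5000)
v_b = cesaro_average(A, [0.0, 0.0, 1.0], 5000)
print(max(abs(a - b) for a, b in zip(v_a, v_b)))  # small: both near pi
```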

Proof: Since $v_t$ is a probability vector, we just want to show that $\|Av_t - v_t\| \to 0$ as $t \to \infty$. Indeed, we can expand this quantity as

$$Av_t - v_t = \frac{1}{t}(Ax_1 + \dots + Ax_t) - \frac{1}{t}(x_1 + \dots + x_t) = \frac{1}{t}(x_{t+1} - x_1)$$

But $x_1, x_{t+1}$ are probability vectors (entries summing to 1), so their difference has norm at most 2, meaning $\|Av_t - v_t\| \le 2/t \to 0$. Now it’s clear that this bound does not depend on $x_0$. For uniqueness we will cop out and appeal to the Perron–Frobenius theorem, which says any matrix of this form has a unique such (normalized) eigenvector.
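The $2/t$ bound from the proof is easy to verify numerically. The sketch below (same hypothetical chain as above, my own example) checks that the $\ell_1$ norm of $Av_t - v_t$ stays below $2/t$ for several values of $t$.

```python
# Hypothetical chain; columns of A sum to 1.
A = [[0.5, 0.2, 0.3],
     [0.3, 0.6, 0.1],
     [0.2, 0.2, 0.6]]

def step(A, x):
    """One step of the walk: A @ x."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def l1(v):
    """l1 norm, the natural norm for probability vectors."""
    return sum(abs(c) for c in v)

x0 = [1.0, 0.0, 0.0]
for t in [1, 10, 100, 1000]:
    # Accumulate v_t = (x_1 + ... + x_t) / t.
    x, total = x0, [0.0] * 3
    for _ in range(t):
        x = step(A, x)
        total = [a + b for a, b in zip(total, x)]
    v_t = [c / t for c in total]
    # The proof says |A v_t - v_t| = |x_{t+1} - x_1| / t <= 2/t.
    gap = l1([a - b for a, b in zip(step(A, v_t), v_t)])
    print(t, gap, 2 / t)
```

Each printed `gap` is below the corresponding `2 / t`, matching the proof's bound.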