This article is about Kolmogorov's criterion in the study of Markov chains. For Kolmogorov's criterion in the study of norms on topological vector spaces, see
Kolmogorov's normability criterion.
In probability theory, Kolmogorov's criterion, named after Andrey Kolmogorov, is a theorem giving a necessary and sufficient condition for a Markov chain or continuous-time Markov chain to be stochastically identical to its time-reversed version.
Discrete-time Markov chains
The theorem states that an irreducible, positive recurrent, aperiodic Markov chain with transition matrix P is reversible if and only if its transition probabilities satisfy[1]
$$p_{j_{1}j_{2}}p_{j_{2}j_{3}}\cdots p_{j_{n-1}j_{n}}p_{j_{n}j_{1}}=p_{j_{1}j_{n}}p_{j_{n}j_{n-1}}\cdots p_{j_{3}j_{2}}p_{j_{2}j_{1}}$$
for all finite sequences of states
$$j_{1},j_{2},\ldots ,j_{n}\in S.$$
Here $p_{ij}$ are components of the transition matrix P, and S is the state space of the chain.
Example
Consider the figure depicting a section of a Markov chain with states i, j, k and l and the corresponding transition probabilities. Here Kolmogorov's criterion implies that the product of probabilities along any closed loop must equal the product along the same loop traversed in the opposite direction; for the loop from i to j to l to k and back to i, this means
$$p_{ij}p_{jl}p_{lk}p_{ki}=p_{ik}p_{kl}p_{lj}p_{ji}.$$
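The loop check is mechanical to verify numerically. As a minimal sketch (the weight matrix and state labelling below are illustrative assumptions, not from the article), one can build a chain that is reversible by construction, namely a random walk on a graph with symmetric edge weights, and confirm that the forward and backward products agree around every cycle of four distinct states:

```python
from itertools import permutations

# Hypothetical 4-state chain: a random walk on a graph with symmetric
# weights w, which is reversible by construction (detailed balance holds
# with pi_i proportional to the i-th row sum of w).
w = [[0, 1, 2, 1],
     [1, 0, 1, 3],
     [2, 1, 0, 1],
     [1, 3, 1, 0]]
row = [sum(r) for r in w]
P = [[w[i][j] / row[i] for j in range(4)] for i in range(4)]

def cycle_products(P, cycle):
    """Product of transition probabilities around `cycle`, and around
    the same cycle traversed in the opposite direction."""
    fwd = bwd = 1.0
    n = len(cycle)
    for a in range(n):
        i, j = cycle[a], cycle[(a + 1) % n]
        fwd *= P[i][j]
        bwd *= P[j][i]
    return fwd, bwd

# Kolmogorov's criterion: the two products agree on every closed loop.
for cycle in permutations(range(4), 4):
    fwd, bwd = cycle_products(P, list(cycle))
    assert abs(fwd - bwd) < 1e-12
```

For the loop 0 → 1 → 3 → 2 → 0 the common value is w01·w13·w32·w20 divided by the product of the four row sums, mirroring the cancellation of the stationary probabilities in the criterion.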
Proof
Let $X_n$ be the Markov chain and denote by $\pi$ its stationary distribution (which exists, since the chain is positive recurrent).
If the chain is reversible, the equality follows from the detailed balance relation $\pi_i p_{ij} = \pi_j p_{ji}$: multiplying these relations around any closed loop makes the stationary probabilities cancel, leaving exactly the stated equality.
Now assume that the equality is fulfilled. Fix states $s$ and $t$. Then for any intermediate states $i_1, \ldots, i_n$,
$$\text{P}(X_{n+1}=t, X_n=i_n, \ldots, X_1=i_1, X_0=s \mid X_0=s) = p_{si_1}p_{i_1 i_2}\cdots p_{i_n t}$$
$$= \frac{p_{st}}{p_{ts}}\, p_{t i_n} p_{i_n i_{n-1}} \cdots p_{i_1 s}$$
$$= \frac{p_{st}}{p_{ts}}\, \text{P}(X_{n+1}=s, X_n=i_1, \ldots, X_1=i_n, X_0=t \mid X_0=t),$$
where the second equality applies the criterion to the closed loop $s, i_1, \ldots, i_n, t, s$.
Now sum both sides of the last equality over all possible ordered choices of the $n$ states $i_1, i_2, \ldots, i_n$. Thus we obtain
$$p_{st}^{(n+1)} = \frac{p_{st}}{p_{ts}}\, p_{ts}^{(n+1)},$$
so
$$\frac{p_{st}^{(n+1)}}{p_{ts}^{(n+1)}} = \frac{p_{st}}{p_{ts}}.$$
Send $n$ to infinity on the left side. From the properties of the chain (irreducibility, positive recurrence and aperiodicity) it follows that $p_{ij}^{(n)} \to \pi_j$, hence
$$\frac{\pi_t}{\pi_s} = \frac{p_{st}}{p_{ts}},$$
that is, $\pi_s p_{st} = \pi_t p_{ts}$, which shows that the chain is reversible.
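The two limiting facts used at the end of the proof, the convergence $p_{st}^{(n)} \to \pi_t$ and the detailed balance relation $\pi_s p_{st} = \pi_t p_{ts}$, can be observed numerically on a toy chain. The 3-state walk below is an illustrative sketch only; its symmetric weights are an assumption chosen so the chain is reversible by construction:

```python
def matmul(A, B):
    """Plain-Python product of two square matrices."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

# Assumed example: random walk on a triangle with symmetric weights.
w = [[0, 1, 2], [1, 0, 3], [2, 3, 0]]
row = [sum(r) for r in w]
P = [[w[i][j] / row[i] for j in range(3)] for i in range(3)]
pi = [r / sum(row) for r in row]   # stationary distribution of this walk

# p_st^{(n)} -> pi_t: iterate until the rows of P^n stabilise.
Pn = P
for _ in range(100):
    Pn = matmul(Pn, P)
for s in range(3):
    for t in range(3):
        assert abs(Pn[s][t] - pi[t]) < 1e-9

# Detailed balance pi_s p_st = pi_t p_ts, i.e. reversibility.
for s in range(3):
    for t in range(3):
        assert abs(pi[s] * P[s][t] - pi[t] * P[t][s]) < 1e-12
```

Since every row of P^n converges to $\pi$, the ratio $p_{st}^{(n)}/p_{ts}^{(n)}$ converges to $\pi_t/\pi_s$, which is exactly the limit taken in the proof.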
Continuous-time Markov chains
The theorem states that a continuous-time Markov chain with transition rate matrix Q is reversible, under any of its invariant probability vectors, if and only if its transition rates satisfy[1]
$$q_{j_{1}j_{2}}q_{j_{2}j_{3}}\cdots q_{j_{n-1}j_{n}}q_{j_{n}j_{1}}=q_{j_{1}j_{n}}q_{j_{n}j_{n-1}}\cdots q_{j_{3}j_{2}}q_{j_{2}j_{1}}$$
for all finite sequences of states
$$j_{1},j_{2},\ldots ,j_{n}\in S.$$
The proof for continuous-time Markov chains follows in the same way as the proof for discrete-time Markov chains.
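The continuous-time statement can be sketched the same way. In the following illustration (the rates and the invariant vector $\pi$ are invented for the example, not taken from the article), a rate matrix Q is built to satisfy detailed balance $\pi_i q_{ij} = \pi_j q_{ji}$ against a chosen $\pi$, and the cycle condition on the rates is then confirmed:

```python
# Hypothetical 3-state continuous-time chain, constructed so that
# detailed balance holds with the assumed invariant vector pi:
# set q_ij = c_ij / pi_i with symmetric "conductances" c_ij, so that
# pi_i * q_ij = c_ij = c_ji = pi_j * q_ji.
pi = [0.5, 0.3, 0.2]
c = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
Q = [[c[i][j] / pi[i] for j in range(3)] for i in range(3)]
for i in range(3):
    # Diagonal entries make each row of the rate matrix sum to zero.
    Q[i][i] = -sum(Q[i][j] for j in range(3) if j != i)

# Kolmogorov's criterion on the rates, e.g. around the loop 0 -> 1 -> 2 -> 0:
fwd = Q[0][1] * Q[1][2] * Q[2][0]
bwd = Q[0][2] * Q[2][1] * Q[1][0]
assert abs(fwd - bwd) < 1e-12

# Detailed balance pi_i q_ij = pi_j q_ji for the invariant vector pi.
for i in range(3):
    for j in range(3):
        if i != j:
            assert abs(pi[i] * Q[i][j] - pi[j] * Q[j][i]) < 1e-12
```

As in the discrete-time case, the invariant probabilities cancel when detailed balance is multiplied around a loop, which is why the criterion involves only the rates.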
References