Categories: Mathematics, Stochastic analysis.

Markov process

A stochastic process $\{X_t : t \ge 0\}$ on a filtered probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}, P)$ is said to be a Markov process if it satisfies the following requirements:

  1. $X_t$ is $\mathcal{F}_t$-adapted, meaning that the current and all past values of $X_t$ can be reconstructed from the filtration $\mathcal{F}_t$.
  2. For every bounded Borel-measurable function $h(x)$, the conditional expectation satisfies $\mathbf{E}[h(X_t) | \mathcal{F}_s] = \mathbf{E}[h(X_t) | X_s]$, i.e. at any time $s \le t$, the expectation of $h(X_t)$ depends only on the current value $X_s$. The boundedness and Borel-measurability of $h$ ensure that $\sigma(h(X_t)) \subseteq \mathcal{F}_t$.

This last condition is called the Markov property, and demands that the future of $X_t$ does not depend on the past, but only on the present value $X_s$.
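
To make this property tangible, here is a minimal numerical sketch in Python, assuming NumPy and a hypothetical 3-state transition matrix `P` chosen purely for illustration: conditioning on extra history beyond the present state should not change the conditional distribution of the future.

```python
import numpy as np

# Hypothetical 3-state chain with an arbitrary transition matrix (illustration only).
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])

rng = np.random.default_rng(0)

n_paths = 200_000
x0 = rng.integers(0, 3, size=n_paths)               # random initial states
x1 = np.array([rng.choice(3, p=P[i]) for i in x0])  # one transition
x2 = np.array([rng.choice(3, p=P[i]) for i in x1])  # another transition

# Markov property: P(X_2 = 2 | X_1 = 1) should equal P(X_2 = 2 | X_1 = 1, X_0 = 0),
# i.e. the extra knowledge of X_0 is irrelevant once X_1 is known.
cond_present = (x2[x1 == 1] == 2).mean()
cond_history = (x2[(x1 == 1) & (x0 == 0)] == 2).mean()
print(cond_present, cond_history, P[1, 2])  # all approximately 0.3
```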

If both $t$ and $X_t$ are taken to be discrete, then $X_t$ is known as a Markov chain. This brings us to the concept of the transition probability $P(X_t \in A | X_s = x)$, which describes the probability that $X_t$ will be in a given set $A$, if we know that currently $X_s = x$.
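
For a time-homogeneous Markov chain (an extra assumption, made here for convenience), the transition probability over $t - s$ steps is a power of the one-step transition matrix, and $P(X_t \in A | X_s = x)$ is then a row sum of that power. A sketch, reusing the hypothetical matrix `P` from above:

```python
import numpy as np

# Same hypothetical 3-state transition matrix as in the sketch above.
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])

def transition_probability(P, s, t, x, A):
    """P(X_t in A | X_s = x) for a time-homogeneous chain: raise the
    one-step matrix to the power t - s and sum row x over the set A."""
    P_n = np.linalg.matrix_power(P, t - s)
    return P_n[x, sorted(A)].sum()

print(transition_probability(P, s=2, t=5, x=0, A={1, 2}))
```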

If $t$ and $X_t$ are continuous, we can often (but not always) express $P$ using a transition density $p(s, x; t, y)$, which gives the probability density that the initial condition $X_s = x$ will evolve into the terminal condition $X_t = y$. Then the transition probability $P$ can be calculated like so, where $B$ is a given Borel set (see $\sigma$-algebra):

$$\begin{aligned}
P(X_t \in B | X_s = x)
= \int_B p(s, x; t, y) \dd{y}
\end{aligned}$$
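
As a concrete sketch of this integral, take the Wiener process discussed below, whose transition density is the Gaussian $p(s, x; t, y) = \mathcal{N}(y; x, t - s)$: the integral over an interval $B = [a, b]$ reduces to a difference of normal CDFs, which we can cross-check against a Monte Carlo estimate (SciPy's `norm` is used here for convenience).

```python
import numpy as np
from scipy.stats import norm

def wiener_transition_probability(s, x, t, a, b):
    """P(X_t in [a, b] | X_s = x) for the Wiener process: integrate the
    Gaussian transition density with mean x and variance t - s over [a, b]."""
    scale = np.sqrt(t - s)
    return norm.cdf(b, loc=x, scale=scale) - norm.cdf(a, loc=x, scale=scale)

# Cross-check by sampling the increment X_t - X_s ~ N(0, t - s) directly.
rng = np.random.default_rng(0)
s, x, t, a, b = 1.0, 0.5, 3.0, 0.0, 2.0
samples = x + rng.normal(0.0, np.sqrt(t - s), size=1_000_000)
print(wiener_transition_probability(s, x, t, a, b),
      ((samples >= a) & (samples <= b)).mean())
```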

A prime example of a continuous Markov process is the Wiener process. Note that this is also a martingale: often, a Markov process happens to be a martingale, or vice versa. However, these concepts are not to be confused: the Markov property does not specify what the expected future must be, and the martingale property says nothing about history-dependence.
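
To illustrate the distinction, here is a sketch using a drifted random walk (the drift value is chosen arbitrarily): it is Markov, since each step depends only on the current value, yet it is not a martingale, because its conditional expectation drifts away from the present value instead of equalling it.

```python
import numpy as np

# A drifted random walk: Markov (each step depends only on the current value)
# but not a martingale (the conditional expectation drifts upward).
rng = np.random.default_rng(0)
n_paths, n_steps, drift = 200_000, 40, 0.1

steps = drift + rng.normal(0.0, 1.0, size=(n_paths, n_steps))
X = np.cumsum(steps, axis=1)  # X[:, k] is the walk after k + 1 steps

s, t = 20, 40  # estimate E[X_t | X_s ~ 2] and compare with the martingale value X_s
near_two = np.abs(X[:, s - 1] - 2.0) < 0.1
print(X[:, t - 1][near_two].mean())  # ~ 2.0 + drift * (t - s) = 4.0, not ~ 2.0
```

Conversely, a martingale need not be Markov: its increments may depend on the entire history, as long as their conditional mean is zero.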

References

  1. U.H. Thygesen, Lecture notes on diffusions and stochastic differential equations, 2021, Polyteknisk Kompendie.