Given a stochastic process $X_t$ on a filtered probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}, P)$, it is said to be a Markov process if it satisfies the following requirements:
- $X_t$ is $\mathcal{F}_t$-adapted, meaning that the current and all past values of $X_t$ can be reconstructed from the filtration $\mathcal{F}_t$.
- For some function $f$, the conditional expectation $\mathbf{E}[f(X_t) \mid \mathcal{F}_s] = \mathbf{E}[f(X_t) \mid X_s]$ for all $s \le t$, i.e. at time $s$, the expectation of $f(X_t)$ depends only on the current value $X_s$. Note that $f$ must be bounded and Borel-measurable, meaning measurable with respect to the Borel $\sigma$-algebra $\mathcal{B}$.
This last condition is called the Markov property, and demands that the future of $X_t$ does not depend on the past, but only on the present $X_s$.
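The Markov property can be illustrated numerically with a symmetric random walk, whose next step is drawn using only the current value. In the sketch below (the helper `next_state` and the two histories are made-up illustrations, not from the text), two different pasts that share the same present yield the same conditional expectation of the next state:

```python
import random

rng = random.Random(1)

def next_state(x, rng):
    """One step of a symmetric random walk: uses only the current x."""
    return x + rng.choice([-1, 1])

# Two different pasts that share the same present X_s = 0:
hist_a, hist_b = [2, 1, 0], [-2, -1, 0]

def cond_mean(hist, n=100_000):
    """Monte-Carlo estimate of E[X_{t+1} | history]; by construction
    it depends only on hist[-1], i.e. on the present state."""
    return sum(next_state(hist[-1], rng) for _ in range(n)) / n

ma, mb = cond_mean(hist_a), cond_mean(hist_b)
# Both estimates are close to 0: the future depends only on the present.
```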
If both $t$ and $X_t$ are taken to be discrete, then $X_t$ is known as a Markov chain. This brings us to the concept of the transition probability $P(s, x; t, A) \equiv P(X_t \in A \mid X_s = x)$, which describes the probability that $X_t$ will be in a given set $A$, if we know that currently $X_s = x$.
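To make the discrete case concrete, here is a minimal sketch of a Markov chain with three states; the transition matrix `P` is an illustrative assumption, with row $i$ holding the probabilities $P(X_{t+1} = j \mid X_t = i)$:

```python
import numpy as np

# Illustrative 3-state transition matrix (values are assumptions):
# row i gives the distribution of the next state given X_t = i.
P = np.array([
    [0.7, 0.2, 0.1],
    [0.3, 0.4, 0.3],
    [0.2, 0.3, 0.5],
])

def simulate(P, x0, steps, rng):
    """Sample a path; each step draws the next state using only
    the current state's row of P, which is the Markov property."""
    path = [x0]
    for _ in range(steps):
        path.append(int(rng.choice(len(P), p=P[path[-1]])))
    return path

rng = np.random.default_rng(0)
path = simulate(P, x0=0, steps=10, rng=rng)
```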
If $t$ and $X_t$ are continuous, we can often (but not always) express $P(s, x; t, A)$ using a transition density $p(s, x; t, y)$, which gives the probability density that the initial condition $X_s = x$ will evolve into the terminal condition $X_t = y$. Then the transition probability can be calculated like so, where $A$ is a given Borel set (see $\sigma$-algebra):

$$P(s, x; t, A) = \int_A p(s, x; t, y) \, dy$$
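As a concrete case, the Wiener process has the Gaussian transition density $p(s, x; t, y) = \exp(-(y-x)^2 / 2(t-s)) / \sqrt{2\pi(t-s)}$, so the integral can be checked numerically against the closed form from the Gaussian CDF. The function names below are illustrative:

```python
import math

def wiener_density(s, x, t, y):
    """Transition density of the Wiener process: a Gaussian in y
    with mean x and variance t - s."""
    var = t - s
    return math.exp(-(y - x)**2 / (2.0*var)) / math.sqrt(2.0*math.pi*var)

def transition_prob(s, x, t, a, b, n=20_000):
    """P(s, x; t, [a, b]): integrate the density over the interval
    with a midpoint Riemann sum."""
    h = (b - a) / n
    return sum(wiener_density(s, x, t, a + (i + 0.5)*h) for i in range(n)) * h

# Closed form via the standard normal CDF, for comparison:
Phi = lambda z: 0.5*(1.0 + math.erf(z / math.sqrt(2.0)))
s, x, t, a, b = 0.0, 0.0, 1.0, -1.0, 1.0
numeric = transition_prob(s, x, t, a, b)
exact = Phi((b - x)/math.sqrt(t - s)) - Phi((a - x)/math.sqrt(t - s))
# Both give roughly 0.68, the mass within one standard deviation.
```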
A prime example of a continuous Markov process is the Wiener process. Note that it is also a martingale: often, a Markov process happens to be a martingale, or vice versa. However, these concepts are not to be confused: the Markov property does not specify what the expected future must be, and the martingale property says nothing about history-dependence.
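To see that the two properties are independent, consider a Wiener process with drift $\mu$: it is still Markov, since each increment depends only on the current value, but it is no longer a martingale, because $\mathbf{E}[X_t \mid X_s] = X_s + \mu (t - s) \ne X_s$. A minimal Monte-Carlo sketch (the parameters are assumptions for illustration):

```python
import random

def drifted_bm(mu, x0, t, steps, rng):
    """Euler simulation of X_t = x0 + mu*t + W_t on a time grid;
    every increment uses only the current value, so it is Markov."""
    dt = t / steps
    x = x0
    for _ in range(steps):
        x += mu*dt + rng.gauss(0.0, dt**0.5)
    return x

rng = random.Random(42)
mu, x0, t = 1.0, 0.0, 1.0
samples = [drifted_bm(mu, x0, t, 100, rng) for _ in range(20_000)]
mean = sum(samples) / len(samples)
# mean lands near x0 + mu*t = 1.0 rather than near x0 = 0.0, so
# E[X_t | X_0] != X_0: the process is Markov but not a martingale.
```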