---
title: "Markov process"
firstLetter: "M"
publishDate: 2021-11-14
categories:
- Mathematics
date: 2021-11-13T21:05:21+01:00
draft: false
markup: pandoc
---
# Markov process
A [stochastic process](/know/concept/stochastic-process/)
$\{X_t : t \ge 0\}$ on a filtered probability space
$(\Omega, \mathcal{F}, \{\mathcal{F}_t\}, P)$
is said to be a **Markov process**
if it satisfies the following requirements:
1. $X_t$ is $\mathcal{F}_t$-adapted,
meaning that the current and all past values of $X_t$
can be reconstructed from the filtration $\mathcal{F}_t$.
2. For every bounded *Borel-measurable* function $h(x)$,
the [conditional expectation](/know/concept/conditional-expectation/)
$\mathbf{E}[h(X_t) | \mathcal{F}_s] = \mathbf{E}[h(X_t) | X_s]$
whenever $s \le t$,
i.e. the expectation of $h(X_t)$ depends only on the current value $X_s$.
Borel-measurability of $h$ guarantees that $h(X_t)$ is a valid random variable,
with $\sigma(h(X_t)) \subseteq \sigma(X_t) \subseteq \mathcal{F}_t$.
This last condition is called the **Markov property**,
and demands that the future of $X_t$ does not depend on the past,
but only on the present $X_s$.
If both $t$ and $X_t$ are taken to be discrete,
then $X_t$ is known as a **Markov chain**.
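As a small illustration, the sketch below simulates a discrete-time Markov chain
on two states; the transition matrix is an arbitrary choice made up for this example,
and the point is that each simulation step only reads the current state,
never the earlier history:

```python
import numpy as np

# Hypothetical two-state chain on {0, 1}; the matrix values are made up.
# Entry T[i, j] = P(X_{n+1} = j | X_n = i).
T = np.array([[0.9, 0.1],
              [0.4, 0.6]])

def simulate_chain(T, x0, n_steps, seed=0):
    """Simulate a Markov chain: each step uses only the current state."""
    rng = np.random.default_rng(seed)
    x, path = x0, [x0]
    for _ in range(n_steps):
        # The distribution of the next state depends only on the present x,
        # not on the earlier history stored in `path`.
        x = int(rng.choice(len(T), p=T[x]))
        path.append(x)
    return path

print(simulate_chain(T, x0=0, n_steps=10))
```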
This brings us to the concept of the **transition probability**
$P(X_t \in A | X_s = x)$, which describes the probability that
$X_t$ will be in a given set $A$, if we know that currently $X_s = x$.
If $t$ and $X_t$ are continuous, we can often (but not always) express $P$
using a **transition density** $p(s, x; t, y)$,
which gives the probability density that the initial condition $X_s = x$
will evolve into the terminal condition $X_t = y$.
Then the transition probability $P$ can be calculated like so,
where $A$ is a given Borel set (see [$\sigma$-algebra](/know/concept/sigma-algebra/)):

$$\begin{aligned}
P(X_t \in A | X_s = x)
= \int_A p(s, x; t, y) \dd{y}
\end{aligned}$$
A prime example of a continuous Markov process is
the [Wiener process](/know/concept/wiener-process/).
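Its transition density is the Gaussian heat kernel
(a standard result, quoted here for illustration):

$$\begin{aligned}
p(s, x; t, y)
= \frac{1}{\sqrt{2 \pi (t - s)}} \exp\!\Big(\!-\!\frac{(y - x)^2}{2 (t - s)} \Big)
\end{aligned}$$

So in this case, the transition probability $P(X_t \in A | X_s = x)$
is found by integrating a Gaussian over $A$.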
Note that the Wiener process is also a [martingale](/know/concept/martingale/):
often, a Markov process happens to be a martingale, or vice versa.
However, the two concepts should not be confused:
the Markov property does not specify *what* the expected future value must be,
and the martingale property says nothing about history-dependence.
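For example, a Wiener process with drift, $X_t = W_t + \mu t$ with $\mu \neq 0$,
is a Markov process but not a martingale, since:

$$\begin{aligned}
\mathbf{E}[X_t | \mathcal{F}_s]
= \mathbf{E}[W_t | \mathcal{F}_s] + \mu t
= W_s + \mu t
= X_s + \mu (t - s)
\neq X_s
\end{aligned}$$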