Diffstat (limited to 'source/know/concept/markov-process')
-rw-r--r--  source/know/concept/markov-process/index.md | 46
1 file changed, 23 insertions(+), 23 deletions(-)
diff --git a/source/know/concept/markov-process/index.md b/source/know/concept/markov-process/index.md
index fd6b076..c938866 100644
--- a/source/know/concept/markov-process/index.md
+++ b/source/know/concept/markov-process/index.md
@@ -9,37 +9,37 @@ layout: "concept"
---
Given a [stochastic process](/know/concept/stochastic-process/)
-$\{X_t : t \ge 0\}$ on a filtered probability space
-$(\Omega, \mathcal{F}, \{\mathcal{F}_t\}, P)$,
+$$\{X_t : t \ge 0\}$$ on a filtered probability space
+$$(\Omega, \mathcal{F}, \{\mathcal{F}_t\}, P)$$,
it is said to be a **Markov process**
if it satisfies the following requirements:
-1. $X_t$ is $\mathcal{F}_t$-adapted,
- meaning that the current and all past values of $X_t$
- can be reconstructed from the filtration $\mathcal{F}_t$.
-2. For some function $h(x)$,
+1. $$X_t$$ is $$\mathcal{F}_t$$-adapted,
+ meaning that the current and all past values of $$X_t$$
+ can be reconstructed from the filtration $$\mathcal{F}_t$$.
+2. For some function $$h(x)$$,
the [conditional expectation](/know/concept/conditional-expectation/)
- $\mathbf{E}[h(X_t) | \mathcal{F}_s] = \mathbf{E}[h(X_t) | X_s]$,
- i.e. at time $s \le t$, the expectation of $h(X_t)$ depends only on the current $X_s$.
- Note that $h$ must be bounded and *Borel-measurable*,
- meaning $\sigma(h(X_t)) \subseteq \mathcal{F}_t$.
+ $$\mathbf{E}[h(X_t) | \mathcal{F}_s] = \mathbf{E}[h(X_t) | X_s]$$,
+ i.e. at time $$s \le t$$, the expectation of $$h(X_t)$$ depends only on the current $$X_s$$.
+ Note that $$h$$ must be bounded and *Borel-measurable*,
+ meaning $$\sigma(h(X_t)) \subseteq \mathcal{F}_t$$.
This last condition is called the **Markov property**,
-and demands that the future of $X_t$ does not depend on the past,
-but only on the present $X_s$.
+and demands that the future of $$X_t$$ does not depend on the past,
+but only on the present $$X_s$$.
-If both $t$ and $X_t$ are taken to be discrete,
-then $X_t$ is known as a **Markov chain**.
+If both $$t$$ and $$X_t$$ are taken to be discrete,
+then $$X_t$$ is known as a **Markov chain**.
This brings us to the concept of the **transition probability**
-$P(X_t \in A | X_s = x)$, which describes the probability that
-$X_t$ will be in a given set $A$, if we know that currently $X_s = x$.
-
-If $t$ and $X_t$ are continuous, we can often (but not always) express $P$
-using a **transition density** $p(s, x; t, y)$,
-which gives the probability density that the initial condition $X_s = x$
-will evolve into the terminal condition $X_t = y$.
-Then the transition probability $P$ can be calculated like so,
-where $B$ is a given Borel set (see [$\sigma$-algebra](/know/concept/sigma-algebra/)):
+$$P(X_t \in A | X_s = x)$$, which describes the probability that
+$$X_t$$ will be in a given set $$A$$, if we know that currently $$X_s = x$$.
+
+If $$t$$ and $$X_t$$ are continuous, we can often (but not always) express $$P$$
+using a **transition density** $$p(s, x; t, y)$$,
+which gives the probability density that the initial condition $$X_s = x$$
+will evolve into the terminal condition $$X_t = y$$.
+Then the transition probability $$P$$ can be calculated like so,
+where $$B$$ is a given Borel set (see [$$\sigma$$-algebra](/know/concept/sigma-algebra/)):
$$\begin{aligned}
P(X_t \in B | X_s = x)
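The Markov-chain case described in the changed text above (discrete $$t$$ and $$X_t$$, with transition probabilities between states) can be illustrated with a minimal sketch. The two-state transition matrix below is a hypothetical example, not taken from the article; it only demonstrates that, by the Markov property, multi-step transition probabilities factor through the current state, so the $$n$$-step probabilities are simply the matrix power $$P^n$$ (the Chapman-Kolmogorov equation).

```python
# Illustrative sketch of a discrete-time Markov chain.
# P[i][j] = probability of stepping from state i to state j.
# The numbers in P are a made-up example for demonstration only.

def mat_mul(a, b):
    """Multiply two square matrices given as lists of lists."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def n_step(p, n):
    """n-step transition probabilities: the matrix power P^n.

    By the Markov property, the distribution after n steps depends
    only on the current state, so repeated one-step transitions
    compose by matrix multiplication (Chapman-Kolmogorov)."""
    size = len(p)
    result = [[1.0 if i == j else 0.0 for j in range(size)]
              for i in range(size)]  # identity matrix = 0 steps
    for _ in range(n):
        result = mat_mul(result, p)
    return result

# Hypothetical two-state chain: state 0 is "sticky", state 1 is not.
P = [[0.9, 0.1],
     [0.5, 0.5]]

P2 = n_step(P, 2)  # two-step transition probabilities
```

Each row of `P2` still sums to 1, as any transition probability must, and `P2` agrees with the direct product `mat_mul(P, P)`.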