author      Prefetch    2022-10-20 18:25:31 +0200
committer   Prefetch    2022-10-20 18:25:31 +0200
commit      16555851b6514a736c5c9d8e73de7da7fc9b6288 (patch)
tree        76b8bfd30f8941d0d85365990bcdbc5d0643cabc /source/know/concept/kolmogorov-equations
parent      e5b9bce79b68a68ddd2e51daa16d2fea73b84fdb (diff)
Migrate from 'jekyll-katex' to 'kramdown-math-sskatex'
Diffstat (limited to 'source/know/concept/kolmogorov-equations')
-rw-r--r--    source/know/concept/kolmogorov-equations/index.md    96
1 file changed, 48 insertions(+), 48 deletions(-)
diff --git a/source/know/concept/kolmogorov-equations/index.md b/source/know/concept/kolmogorov-equations/index.md
index 47820ee..1ca2df6 100644
--- a/source/know/concept/kolmogorov-equations/index.md
+++ b/source/know/concept/kolmogorov-equations/index.md
@@ -10,7 +10,7 @@ layout: "concept"
---
Consider the following general [Itō diffusion](/know/concept/ito-calculus/)
-$X_t \in \mathbb{R}$, which is assumed to satisfy
+$$X_t \in \mathbb{R}$$, which is assumed to satisfy
the conditions for unique existence on the entire time axis:
$$\begin{aligned}
@@ -18,14 +18,14 @@ $$\begin{aligned}
= f(X_t, t) \dd{t} + g(X_t, t) \dd{B_t}
\end{aligned}$$
-Let $\mathcal{F}_t$ be the filtration to which $X_t$ is adapted,
-then we define $Y_s$ as shown below,
+Let $$\mathcal{F}_t$$ be the filtration to which $$X_t$$ is adapted,
+then we define $$Y_s$$ as shown below,
namely as the [conditional expectation](/know/concept/conditional-expectation/)
-of $h(X_t)$, for an arbitrary bounded function $h(x)$,
-given the information $\mathcal{F}_s$ available at time $s \le t$.
-Because $X_t$ is a [Markov process](/know/concept/markov-process/),
-$Y_s$ must be $X_s$-measurable,
-so it is a function $k$ of $X_s$ and $s$:
+of $$h(X_t)$$, for an arbitrary bounded function $$h(x)$$,
+given the information $$\mathcal{F}_s$$ available at time $$s \le t$$.
+Because $$X_t$$ is a [Markov process](/know/concept/markov-process/),
+$$Y_s$$ must be $$X_s$$-measurable,
+so it is a function $$k$$ of $$X_s$$ and $$s$$:
$$\begin{aligned}
Y_s
@@ -34,8 +34,8 @@ $$\begin{aligned}
= k(X_s, s)
\end{aligned}$$
-Consequently, we can apply Itō's lemma to find $\dd{Y_s}$
-in terms of $k$, $f$ and $g$:
+Consequently, we can apply Itō's lemma to find $$\dd{Y_s}$$
+in terms of $$k$$, $$f$$ and $$g$$:
$$\begin{aligned}
\dd{Y_s}
@@ -44,19 +44,19 @@ $$\begin{aligned}
&= \bigg( \pdv{k}{s} + \hat{L} k \bigg) \dd{s} + \pdv{k}{x} g \dd{B_s}
\end{aligned}$$
-Where we have defined the linear operator $\hat{L}$
-to have the following action on $k$:
+Where we have defined the linear operator $$\hat{L}$$
+to have the following action on $$k$$:
$$\begin{aligned}
\hat{L} k
\equiv \pdv{k}{x} f + \frac{1}{2} \pdvn{2}{k}{x} g^2
\end{aligned}$$
-At this point, we need to realize that $Y_s$ is
-a [martingale](/know/concept/martingale/) with respect to $\mathcal{F}_s$,
-since $Y_s$ is $\mathcal{F}_s$-adapted and finite,
+At this point, we need to realize that $$Y_s$$ is
+a [martingale](/know/concept/martingale/) with respect to $$\mathcal{F}_s$$,
+since $$Y_s$$ is $$\mathcal{F}_s$$-adapted and finite,
and it satisfies the martingale property,
-for $r \le s \le t$:
+for $$r \le s \le t$$:
$$\begin{aligned}
\mathbf{E}[Y_s | \mathcal{F}_r]
@@ -66,20 +66,20 @@ $$\begin{aligned}
\end{aligned}$$
Where we used the tower property of conditional expectations,
-because $\mathcal{F}_r \subset \mathcal{F}_s$.
+because $$\mathcal{F}_r \subset \mathcal{F}_s$$.
However, an Itō diffusion can only be a martingale
-if its drift term (the one containing $\dd{s}$) vanishes,
-so, looking at $\dd{Y_s}$, we must demand that:
+if its drift term (the one containing $$\dd{s}$$) vanishes,
+so, looking at $$\dd{Y_s}$$, we must demand that:
$$\begin{aligned}
\pdv{k}{s} + \hat{L} k
= 0
\end{aligned}$$
-Because $k(X_s, s)$ is a Markov process,
-we can write it with a transition density $p(s, X_s; t, X_t)$,
-where in this case $s$ and $X_s$ are given initial conditions,
-$t$ is a parameter, and the terminal state $X_t$ is a random variable.
+Because $$k(X_s, s)$$ is a Markov process,
+we can write it with a transition density $$p(s, X_s; t, X_t)$$,
+where in this case $$s$$ and $$X_s$$ are given initial conditions,
+$$t$$ is a parameter, and the terminal state $$X_t$$ is a random variable.
We thus have:
$$\begin{aligned}
@@ -87,26 +87,26 @@ $$\begin{aligned}
= \int_{-\infty}^\infty p(s, x; t, y) \: h(y) \dd{y}
\end{aligned}$$
-We insert this into the equation that we just derived for $k$, yielding:
+We insert this into the equation that we just derived for $$k$$, yielding:
$$\begin{aligned}
0
= \int_{-\infty}^\infty \!\! \Big( \pdv{}{s}p(s, x; t, y) + \hat{L} p(s, x; t, y) \Big) h(y) \dd{y}
\end{aligned}$$
-Because $h$ is arbitrary, and this must be satisfied for all $h$,
-the transition density $p$ fulfills:
+Because $$h$$ is arbitrary, and this must be satisfied for all $$h$$,
+the transition density $$p$$ fulfills:
$$\begin{aligned}
0
= \pdv{}{s}p(s, x; t, y) + \hat{L} p(s, x; t, y)
\end{aligned}$$
-Here, $t$ is a known parameter and $y$ is a "known" integration variable,
-leaving only $s$ and $x$ as free variables for us to choose.
-We therefore define the **likelihood function** $\psi(s, x)$,
-which gives the likelihood of an initial condition $(s, x)$
-given that the terminal condition is $(t, y)$:
+Here, $$t$$ is a known parameter and $$y$$ is a "known" integration variable,
+leaving only $$s$$ and $$x$$ as free variables for us to choose.
+We therefore define the **likelihood function** $$\psi(s, x)$$,
+which gives the likelihood of an initial condition $$(s, x)$$
+given that the terminal condition is $$(t, y)$$:
$$\begin{aligned}
\boxed{
@@ -116,7 +116,7 @@ $$\begin{aligned}
\end{aligned}$$
And from the above derivation,
-we conclude that $\psi$ satisfies the following PDE,
+we conclude that $$\psi$$ satisfies the following PDE,
known as the **backward Kolmogorov equation**:
$$\begin{aligned}
@@ -128,9 +128,9 @@ $$\begin{aligned}
\end{aligned}$$
Moving on, we can define the traditional
-**probability density function** $\phi(t, y)$ from the transition density $p$,
-by fixing the initial $(s, x)$
-and leaving the terminal $(t, y)$ free:
+**probability density function** $$\phi(t, y)$$ from the transition density $$p$$,
+by fixing the initial $$(s, x)$$
+and leaving the terminal $$(t, y)$$ free:
$$\begin{aligned}
\boxed{
@@ -139,10 +139,10 @@ $$\begin{aligned}
}
\end{aligned}$$
-With this in mind, for $(s, x) = (0, X_0)$,
-the unconditional expectation $\mathbf{E}[Y_t]$
+With this in mind, for $$(s, x) = (0, X_0)$$,
+the unconditional expectation $$\mathbf{E}[Y_t]$$
(i.e. the conditional expectation without information)
-will be constant in time, because $Y_t$ is a martingale:
+will be constant in time, because $$Y_t$$ is a martingale:
$$\begin{aligned}
\mathbf{E}[Y_t]
@@ -154,8 +154,8 @@ $$\begin{aligned}
This integral has the form of an inner product,
so we switch to [Dirac notation](/know/concept/dirac-notation/).
-We differentiate with respect to $t$,
-and use the backward equation $\ipdv{k}{t} + \hat{L} k = 0$:
+We differentiate with respect to $$t$$,
+and use the backward equation $$\ipdv{k}{t} + \hat{L} k = 0$$:
$$\begin{aligned}
0
@@ -165,11 +165,11 @@ $$\begin{aligned}
= \Inprod{k}{\pdv{\phi}{t} - \hat{L}{}^\dagger \phi}
\end{aligned}$$
-Where $\hat{L}{}^\dagger$ is by definition the adjoint operator of $\hat{L}$,
+Where $$\hat{L}{}^\dagger$$ is by definition the adjoint operator of $$\hat{L}$$,
which we calculate using partial integration,
-where all boundary terms vanish thanks to the *existence* of $X_t$;
-in other words, $X_t$ cannot reach infinity at any finite $t$,
-so the integrand must decay to zero for $|y| \to \infty$:
+where all boundary terms vanish thanks to the *existence* of $$X_t$$;
+in other words, $$X_t$$ cannot reach infinity at any finite $$t$$,
+so the integrand must decay to zero for $$|y| \to \infty$$:
$$\begin{aligned}
\Inprod{\hat{L} k}{\phi}
@@ -185,9 +185,9 @@ $$\begin{aligned}
= \Inprod{k}{\hat{L}{}^\dagger \phi}
\end{aligned}$$
-Since $k$ is arbitrary, and $\ipdv{\Inprod{k}{\phi}}{t} = 0$ for all $k$,
+Since $$k$$ is arbitrary, and $$\ipdv{\Inprod{k}{\phi}}{t} = 0$$ for all $$k$$,
we thus arrive at the **forward Kolmogorov equation**,
-describing the evolution of the probability density $\phi(t, y)$:
+describing the evolution of the probability density $$\phi(t, y)$$:
$$\begin{aligned}
\boxed{
@@ -199,7 +199,7 @@ $$\begin{aligned}
This can be rewritten in a way
that highlights the connection between Itō diffusions and physical diffusion,
-if we define the **diffusivity** $D$, **advection** $u$, and **probability flux** $J$:
+if we define the **diffusivity** $$D$$, **advection** $$u$$, and **probability flux** $$J$$:
$$\begin{aligned}
D
@@ -223,7 +223,7 @@ $$\begin{aligned}
}
\end{aligned}$$
-Note that if $u = 0$, then this reduces to
+Note that if $$u = 0$$, then this reduces to
[Fick's second law](/know/concept/ficks-laws/).
The backward Kolmogorov equation can also be rewritten analogously,
although it is less noteworthy:
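
For reference, the two boxed results whose delimiters are migrated above are the backward and forward Kolmogorov equations. The following is a summary sketch of their standard forms, written with the operator $$\hat{L}$$ and its adjoint $$\hat{L}{}^\dagger$$ as defined in index.md, and assuming the site's KaTeX preamble provides the $$\pdv$$ and $$\pdvn$$ derivative macros; the backward equation is in $$\psi(s, x)$$, the forward one in $$\phi(t, y)$$:

$$\begin{aligned}
0
&= \pdv{\psi}{s} + \hat{L} \psi
= \pdv{\psi}{s} + f \pdv{\psi}{x} + \frac{1}{2} g^2 \pdvn{2}{\psi}{x}
\\
\pdv{\phi}{t}
&= \hat{L}{}^\dagger \phi
= - \pdv{}{y} \Big( f \phi \Big) + \frac{1}{2} \pdvn{2}{}{y} \Big( g^2 \phi \Big)
\end{aligned}$$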