---
title: "Dynkin's formula"
firstLetter: "D"
publishDate: 2021-11-28
categories:
- Mathematics
- Stochastic analysis

date: 2021-11-26T10:10:09+01:00
draft: false
markup: pandoc
---

# Dynkin's formula

Consider an [Itō diffusion](/know/concept/ito-calculus/) $X_t$
with a time-independent drift $f$ and intensity $g$,
such that the diffusion exists uniquely on the whole $t$-axis.
We define the **infinitesimal generator** $\hat{A}$
as an operator with the following action on a given function $h(x)$,
where $\mathbf{E}$ is a
[conditional expectation](/know/concept/conditional-expectation/):

$$\begin{aligned}
  \boxed{
    \hat{A}\{h(X_0)\}
    \equiv \lim_{t \to 0^+} \bigg[ \frac{1}{t} \mathbf{E}\Big[ h(X_t) - h(X_0) \Big| X_0 \Big] \bigg]
  }
\end{aligned}$$

This definition only makes sense for those $h$ for which the limit exists.
The assumption that $X_t$ does not have any explicit time-dependence
means that $X_0$ need not be the true initial condition;
it can also be the state $X_s$ at any $s$ infinitesimally smaller than $t$.

Conveniently, for a sufficiently well-behaved $h$,
the generator $\hat{A}$ is identical to the Kolmogorov operator $\hat{L}$
found in the [backward Kolmogorov equation](/know/concept/kolmogorov-equations/):

$$\begin{aligned}
  \boxed{
    \hat{A}\{h(x)\}
    = \hat{L}\{h(x)\}
  }
\end{aligned}$$

<div class="accordion">
<input type="checkbox" id="proof-kolmogorov"/>
<label for="proof-kolmogorov">Proof</label>
<div class="hidden">
<label for="proof-kolmogorov">Proof.</label>
We define a new process $Y_t \equiv h(X_t)$, and then apply Itō's lemma, leading to:

$$\begin{aligned}
  \dd{Y_t}
  &= \bigg( \pdv{h}{x} f(X_t) + \frac{1}{2} \pdv[2]{h}{x} g^2(X_t) \bigg) \dd{t} + \pdv{h}{x} g(X_t) \dd{B_t}
  \\
  &= \hat{L}\{h(X_t)\} \dd{t} + \pdv{h}{x} g(X_t) \dd{B_t}
\end{aligned}$$

Where we have recognized the definition of $\hat{L}$.
Integrating the above equation from $0$ to $t$ yields:

$$\begin{aligned}
  Y_t
  = Y_0 + \int_0^t \hat{L}\{h(X_s)\} \dd{s} + \int_0^t \pdv{h}{x} g(X_s) \dd{B_s}
\end{aligned}$$

As always, the latter [Itō integral](/know/concept/ito-integral/)
is a [martingale](/know/concept/martingale/), so it vanishes
when we take the expectation conditioned on the "initial" state $X_0$, leaving:

$$\begin{aligned}
  \mathbf{E}[Y_t | X_0]
  = Y_0 + \mathbf{E}\bigg[ \int_0^t \hat{L}\{h(X_s)\} \dd{s} \bigg| X_0 \bigg]
\end{aligned}$$

For sufficiently small $t$, the integral can be replaced by its first-order approximation:

$$\begin{aligned}
  \mathbf{E}[Y_t | X_0]
  \approx Y_0 + \hat{L}\{h(X_0)\} \: t
\end{aligned}$$

Rearranging this gives the following,
to be understood in the limit $t \to 0^+$:

$$\begin{aligned}
  \hat{L}\{h(X_0)\}
  \approx \frac{1}{t} \mathbf{E}[Y_t - Y_0 | X_0]
\end{aligned}$$
</div>
</div>

The general definition of $\hat{A}$ resembles that of a classical derivative,
and indeed, the generator $\hat{A}$ can be thought of as a differential operator.
In that case, we would like an analogue of the classical
fundamental theorem of calculus to relate it to integration.
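Before introducing that analogue, the definition of $\hat{A}$
and the identity $\hat{A} = \hat{L}$ can be checked numerically.
The following is a minimal Python sketch, in which the drift $f$, intensity $g$,
test function $h$ and all parameters are arbitrary choices for illustration:
for a single Euler-Maruyama step of size $t$ starting from $X_0$,
the Monte Carlo estimate of $\frac{1}{t} \mathbf{E}[h(X_t) - h(X_0) | X_0]$
should approach the analytic $\hat{L}\{h(X_0)\} = f \, h' + \frac{1}{2} g^2 h''$ as $t \to 0^+$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary illustrative choices (assumptions, not prescribed by the text above)
f = lambda x: -x          # drift f(x)
g = lambda x: 0.5         # intensity g(x), here constant
h = lambda x: x**2        # test function h(x)
x0 = 1.0                  # conditioning state X_0

# Analytic Kolmogorov operator:  L{h}(x) = f(x) h'(x) + (1/2) g(x)^2 h''(x)
Lh_exact = f(x0) * 2*x0 + 0.5 * g(x0)**2 * 2

n = 200_000
for t in [0.1, 0.01, 0.001]:
    # One Euler-Maruyama step of size t, started from X_0 = x0
    Xt = x0 + f(x0)*t + g(x0)*np.sqrt(t)*rng.standard_normal(n)
    estimate = np.mean(h(Xt) - h(x0)) / t
    print(f"t = {t:5.3f}:  MC estimate = {estimate:+.3f},  analytic L(h) = {Lh_exact:+.3f}")
```

For the smallest $t$, the remaining discrepancy is dominated by Monte Carlo noise
rather than by the finite step size.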
The desired analogue of the fundamental theorem of calculus
is provided by **Dynkin's formula**:
for a stopping time $\tau$ with a finite expected value $\mathbf{E}[\tau | X_0] < \infty$,
it states that:

$$\begin{aligned}
  \boxed{
    \mathbf{E}\big[ h(X_\tau) | X_0 \big]
    = h(X_0) + \mathbf{E}\bigg[ \int_0^\tau \hat{L}\{h(X_t)\} \dd{t} \bigg| X_0 \bigg]
  }
\end{aligned}$$

<div class="accordion">
<input type="checkbox" id="proof-dynkin"/>
<label for="proof-dynkin">Proof</label>
<div class="hidden">
<label for="proof-dynkin">Proof.</label>
The proof is similar to the one above.
Define $Y_t \equiv h(X_t)$ and use Itō's lemma:

$$\begin{aligned}
  \dd{Y_t}
  &= \bigg( \pdv{h}{x} f(X_t) + \frac{1}{2} \pdv[2]{h}{x} g^2(X_t) \bigg) \dd{t} + \pdv{h}{x} g(X_t) \dd{B_t}
  \\
  &= \hat{L}\{h(X_t)\} \dd{t} + \pdv{h}{x} g(X_t) \dd{B_t}
\end{aligned}$$

And then integrate this from $t = 0$ to the provided stopping time $t = \tau$:

$$\begin{aligned}
  Y_\tau
  = Y_0 + \int_0^\tau \hat{L}\{h(X_t)\} \dd{t} + \int_0^\tau \pdv{h}{x} g(X_t) \dd{B_t}
\end{aligned}$$

All [Itō integrals](/know/concept/ito-integral/)
are [martingales](/know/concept/martingale/),
and hence so is the rest of this equality,
namely $Y_t - Y_0 - \int_0^t \hat{L}\{h(X_s)\} \dd{s}$.
Since $\mathbf{E}[\tau | X_0]$ is finite,
the optional stopping theorem says that its conditional expectation
at the stopping time $\tau$ vanishes:

$$\begin{aligned}
  0
  = \mathbf{E}\bigg[ Y_\tau - Y_0 - \int_0^\tau \hat{L}\{h(X_t)\} \dd{t} \bigg| X_0 \bigg]
\end{aligned}$$

Isolating this equation for $\mathbf{E}[Y_\tau | X_0]$ then gives Dynkin's formula.
</div>
</div>

A common application of Dynkin's formula is predicting
when the stopping time $\tau$ occurs, and in what state $X_\tau$ this happens.
Consider an example:
for a region $\Omega$ of state space with $X_0 \in \Omega$,
we define the exit time $\tau \equiv \inf\{ t : X_t \notin \Omega \}$,
provided that $\mathbf{E}[\tau | X_0] < \infty$.

To get information about when and where $X_t$ exits $\Omega$,
we define the *general reward* $\Gamma$ as follows,
consisting of a *running reward* $R$ for $X_t$ inside $\Omega$,
and a *terminal reward* $T$ on the boundary $\partial \Omega$, where we stop at $X_\tau$:

$$\begin{aligned}
  \Gamma
  = \int_0^\tau R(X_t) \dd{t} + T(X_\tau)
\end{aligned}$$

For example, for $R = 1$ and $T = 0$, this becomes $\Gamma = \tau$,
and if $R = 0$, then $T(X_\tau)$ can tell us the exit point.
Let us now define $h(x) \equiv \mathbf{E}[\Gamma | X_0 = x]$,
and apply Dynkin's formula:

$$\begin{aligned}
  \mathbf{E}\big[ h(X_\tau) | X_0 \big]
  &= \mathbf{E}\big[ \Gamma \big| X_0 \big] + \mathbf{E}\bigg[ \int_0^\tau \hat{L}\{h(X_t)\} \dd{t} \bigg| X_0 \bigg]
  \\
  &= \mathbf{E}\big[ T(X_\tau) | X_0 \big] + \mathbf{E}\bigg[ \int_0^\tau \hat{L}\{h(X_t)\} + R(X_t) \dd{t} \bigg| X_0 \bigg]
\end{aligned}$$

The two leftmost terms depend only on the exit point $X_\tau$,
and not directly on $X_t$ for $t < \tau$,
while the rightmost term depends on the whole trajectory of $X_t$ up to $\tau$.
Therefore, the above formula is fulfilled
if $h(x)$ satisfies the following equation and boundary conditions:

$$\begin{aligned}
  \boxed{
    \begin{cases}
      \hat{L}\{h(x)\} + R(x) = 0 & \mathrm{for}\; x \in \Omega \\
      h(x) = T(x) & \mathrm{for}\; x \notin \Omega
    \end{cases}
  }
\end{aligned}$$

In other words, we have just turned a difficult question about a stochastic trajectory $X_t$
into a classical differential boundary value problem for $h(x)$.
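As a sanity check of this correspondence, the following minimal Python sketch
treats the simplest case $R = 1$, $T = 0$, so that $h(x) = \mathbf{E}[\tau | X_0 = x]$,
for an arbitrarily chosen drift $f$, intensity $g$ and interval $\Omega = (a, b)$
(all of which are illustrative assumptions):
it solves the boundary value problem by finite differences,
and compares the result to a direct Euler-Maruyama simulation of the exit time:

```python
# (i)  solve  f h' + (1/2) g^2 h'' = -1  with  h(a) = h(b) = 0  by finite differences
# (ii) estimate E[tau | X_0 = x0] directly by simulating the SDE until it exits (a, b)
import numpy as np

# Arbitrary illustrative choices (assumptions, not prescribed by the text above)
f = lambda x: -x        # drift f(x)
g = lambda x: 1.0       # intensity g(x), here constant
a, b = -1.0, 1.0        # Omega = (a, b)
x0 = 0.3                # starting state X_0

# --- (i) finite-difference boundary value problem ---
n = 401
x = np.linspace(a, b, n)
dx = x[1] - x[0]
A = np.zeros((n, n))
rhs = np.full(n, -1.0)              # right-hand side -R(x) with R = 1
for i in range(1, n - 1):
    A[i, i - 1] = -f(x[i]) / (2*dx) + 0.5 * g(x[i])**2 / dx**2
    A[i, i]     = -g(x[i])**2 / dx**2
    A[i, i + 1] = +f(x[i]) / (2*dx) + 0.5 * g(x[i])**2 / dx**2
A[0, 0] = A[-1, -1] = 1.0           # boundary condition h = T = 0
rhs[0] = rhs[-1] = 0.0
h = np.linalg.solve(A, rhs)
print("BVP estimate of E[tau | X_0]:", np.interp(x0, x, h))

# --- (ii) Monte Carlo simulation of the exit time ---
rng = np.random.default_rng(1)
dt, n_paths = 1e-3, 10_000
X = np.full(n_paths, x0)
tau = np.zeros(n_paths)
alive = np.ones(n_paths, dtype=bool)
while alive.any():
    Xa = X[alive]
    X[alive] = Xa + f(Xa)*dt + g(Xa)*np.sqrt(dt)*rng.standard_normal(Xa.size)
    tau[alive] += dt
    alive &= (X > a) & (X < b)
print("MC  estimate of E[tau | X_0]:", tau.mean())
```

The two estimates should agree up to the finite-difference and time-discretization errors
and the Monte Carlo sampling noise; in particular, the Euler scheme tends to slightly
overestimate exit times, because it can miss boundary crossings that happen within a step.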