Categories: Mathematics, Stochastic analysis.

# Dynkin’s formula

Consider an Itō diffusion $X_t$ with a time-independent drift $f$ and intensity $g$, such that a unique solution $X_t$ exists for all $t \ge 0$. We define the infinitesimal generator $\hat{A}$ as an operator with the following action on a given function $h(x)$, where $\mathbf{E}$ denotes a conditional expectation:

\begin{aligned} \boxed{ \hat{A}\{h(X_0)\} \equiv \lim_{t \to 0^+} \bigg[ \frac{1}{t} \mathbf{E}\Big[ h(X_t) - h(X_0) \Big| X_0 \Big] \bigg] } \end{aligned}

This definition only makes sense for functions $h$ for which the limit exists. The assumption that $X_t$ does not have any explicit time-dependence means that $X_0$ need not be the true initial condition; it can also be the state $X_s$ at any $s$ infinitesimally smaller than $t$.

Conveniently, for a sufficiently well-behaved $h$, the generator $\hat{A}$ is identical to the Kolmogorov operator $\hat{L}$ found in the backward Kolmogorov equation:

\begin{aligned} \boxed{ \hat{A}\{h(x)\} = \hat{L}\{h(x)\} } \end{aligned}
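This identity can be checked numerically (a sketch not from the original text, with an arbitrarily chosen process, step size and sample count): take $f(x) = -x$ and $g(x) = 1$, estimate the limit in the definition of $\hat{A}$ by Monte Carlo over one small Euler–Maruyama step, and compare it to $\hat{L}\{h\}(x) = f(x) \: h'(x) + \frac{1}{2} g^2(x) \: h''(x)$:

```python
import numpy as np

rng = np.random.default_rng(0)
x0, t, n = 0.5, 1e-3, 4_000_000
h = lambda x: x**2

# One small Euler-Maruyama step of dX = f(X) dt + g(X) dB
# with drift f(x) = -x and intensity g(x) = 1:
xt = x0 - x0 * t + np.sqrt(t) * rng.standard_normal(n)

# Generator estimate: (1/t) E[h(X_t) - h(X_0) | X_0 = x0]
gen_est = (h(xt).mean() - h(x0)) / t

# Kolmogorov operator: L{h}(x) = f(x) h'(x) + (1/2) g^2(x) h''(x)
#                              = -x * 2x + (1/2) * 2 = 1 - 2 x^2
L_h = 1.0 - 2.0 * x0**2

print(gen_est, L_h)  # both close to 0.5
```

The agreement is only up to Monte Carlo noise and the $O(t)$ error of the finite difference quotient, but it illustrates that the short-time expectation is governed by $\hat{L}$.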

We define a new process $Y_t \equiv h(X_t)$, and then apply Itō’s lemma, leading to:

\begin{aligned} \dd{Y_t} &= \bigg( \pdv{h}{x} f(X_t) + \frac{1}{2} \pdvn{2}{h}{x} g^2(X_t) \bigg) \dd{t} + \pdv{h}{x} g(X_t) \dd{B_t} \\ &= \hat{L}\{h(X_t)\} \dd{t} + \pdv{h}{x} g(X_t) \dd{B_t} \end{aligned}

Where we have recognized the definition of $\hat{L}$. Integrating the above equation yields:

\begin{aligned} Y_t = Y_0 + \int_0^t \hat{L}\{h(X_s)\} \dd{s} + \int_0^t \pdv{h}{x} g(X_s) \dd{B_s} \end{aligned}

As always, the latter Itō integral is a martingale, so its expectation vanishes when we condition on the “initial” state $X_0$, leaving:

\begin{aligned} \mathbf{E}[Y_t | X_0] = Y_0 + \mathbf{E}\bigg[ \int_0^t \hat{L}\{h(X_s)\} \dd{s} \bigg| X_0 \bigg] \end{aligned}

For sufficiently small $t$, the integral can be replaced by its first-order approximation:

\begin{aligned} \mathbf{E}[Y_t | X_0] \approx Y_0 + \hat{L}\{h(X_0)\} \: t \end{aligned}

Rearranging this gives the following, to be understood in the limit $t \to 0^+$:

\begin{aligned} \hat{L}\{h(X_0)\} \approx \frac{1}{t} \mathbf{E}[Y_t - Y_0| X_0] \end{aligned}

The general definition of $\hat{A}$ resembles that of a classical derivative, and indeed, the generator $\hat{A}$ can be thought of as a differential operator. In that case, we would like an analogue of the classical fundamental theorem of calculus to relate it to integration.

Such an analogue is provided by Dynkin’s formula: for a stopping time $\tau$ with a finite expected value $\mathbf{E}[\tau|X_0] < \infty$, it states that:

\begin{aligned} \boxed{ \mathbf{E}\big[ h(X_\tau) | X_0 \big] = h(X_0) + \mathbf{E}\bigg[ \int_0^\tau \hat{L}\{h(X_t)\} \dd{t} \bigg| X_0 \bigg] } \end{aligned}

The proof is similar to the one above. Define $Y_t = h(X_t)$ and use Itō’s lemma:

\begin{aligned} \dd{Y_t} &= \bigg( \pdv{h}{x} f(X_t) + \frac{1}{2} \pdvn{2}{h}{x} g^2(X_t) \bigg) \dd{t} + \pdv{h}{x} g(X_t) \dd{B_t} \\ &= \hat{L} \{h(X_t)\} \dd{t} + \pdv{h}{x} g(X_t) \dd{B_t} \end{aligned}

And then integrate this from $t = 0$ to the provided stopping time $t = \tau$:

\begin{aligned} Y_\tau = Y_0 + \int_0^\tau \hat{L}\{h(X_t)\} \dd{t} + \int_0^\tau \pdv{h}{x} g(X_t) \dd{B_t} \end{aligned}

The Itō integral is a martingale, so by the optional stopping theorem (which applies since $\mathbf{E}[\tau|X_0] < \infty$) its conditional expectation at $t = \tau$ vanishes for the “initial” condition $X_0$, leaving:

\begin{aligned} 0 = \mathbf{E}\bigg[ Y_\tau - Y_0 - \int_0^\tau \hat{L}\{h(X_t)\} \dd{t} \bigg| X_0 \bigg] \end{aligned}

Isolating this equation for $\mathbf{E}[Y_\tau \!\mid\! X_0]$ then gives Dynkin’s formula.

A common application of Dynkin’s formula is predicting when the stopping time $\tau$ occurs, and in what state $X_\tau$ this happens. Consider an example: for a region $\Omega$ of state space with $X_0 \in \Omega$, we define the exit time $\tau \equiv \inf\{ t : X_t \notin \Omega \}$, provided that $\mathbf{E}[\tau | X_0] < \infty$.

To get information about when and where $X_t$ exits $\Omega$, we define the general reward $\Gamma$ as follows, consisting of a running reward $R$ for $X_t$ inside $\Omega$, and a terminal reward $T$ on the boundary $\partial \Omega$ where we stop at $X_\tau$:

\begin{aligned} \Gamma = \int_0^\tau R(X_t) \dd{t} + \: T(X_\tau) \end{aligned}

For example, for $R = 1$ and $T = 0$, this becomes $\Gamma = \tau$, and if $R = 0$, then $T(X_\tau)$ can tell us the exit point. Let us now define $h(X_0) = \mathbf{E}[\Gamma | X_0]$, and apply Dynkin’s formula:

\begin{aligned} \mathbf{E}\big[ h(X_\tau) | X_0 \big] &= \mathbf{E}\big[ \Gamma \big| X_0 \big] + \mathbf{E}\bigg[ \int_0^\tau \hat{L}\{h(X_t)\} \dd{t} \bigg| X_0 \bigg] \\ &= \mathbf{E}\big[ T(X_\tau) | X_0 \big] + \mathbf{E}\bigg[ \int_0^\tau \hat{L}\{h(X_t)\} + R(X_t) \dd{t} \bigg| X_0 \bigg] \end{aligned}

The two leftmost terms depend on the exit point $X_\tau$, but not directly on $X_t$ for $t < \tau$, while the rightmost term depends on the whole trajectory $X_t$. Therefore, the above formula is satisfied if $h(x)$ obeys the following equation and boundary conditions:

\begin{aligned} \boxed{ \begin{cases} \hat{L}\{h(x)\} + R(x) = 0 & \mathrm{for}\; x \in \Omega \\ h(x) = T(x) & \mathrm{for}\; x \notin \Omega \end{cases} } \end{aligned}
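As a concrete example (not in the original text), take standard Brownian motion on $\Omega = (0, 1)$ with $R = 1$ and $T = 0$, so that $h(x) = \mathbf{E}[\tau | X_0 = x]$. The boundary value problem becomes $\frac{1}{2} h'' + 1 = 0$ with $h(0) = h(1) = 0$, whose analytic solution is $h(x) = x (1 - x)$. A minimal finite-difference sketch (grid size chosen arbitrarily):

```python
import numpy as np

# Solve (1/2) h'' + 1 = 0 on (0, 1) with h(0) = h(1) = 0,
# i.e. R = 1, T = 0: h(x) is the expected exit time from (0, 1).
m = 199                        # number of interior grid points
dx = 1.0 / (m + 1)
x = np.linspace(dx, 1.0 - dx, m)

# Standard second-difference approximation of h'' (the zero Dirichlet
# boundary values make the first and last rows consistent as-is)
D2 = (np.diag(np.ones(m - 1), -1)
      - 2.0 * np.eye(m)
      + np.diag(np.ones(m - 1), 1)) / dx**2

# L{h} + R = 0  =>  (1/2) D2 h = -1
h = np.linalg.solve(0.5 * D2, -np.ones(m))

err = np.max(np.abs(h - x * (1.0 - x)))  # compare to x (1 - x)
print(err)
```

Since the exact solution is quadratic, the second-order finite differences reproduce it up to rounding error, confirming that the stochastic exit-time question has been reduced to a deterministic boundary value problem.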

In other words, we have just turned a difficult question about a stochastic trajectory $X_t$ into a classical differential boundary value problem for $h(x)$.

## References

1. U.H. Thygesen, Lecture notes on diffusions and stochastic differential equations, 2021, Polyteknisk Kompendie.