Categories: Mathematics, Stochastic analysis.

Dynkin’s formula

Consider an Itō diffusion $X_t$ with a time-independent drift $f$ and intensity $g$, such that the diffusion exists uniquely for all times $t$. We define its infinitesimal generator $\hat{A}$ as an operator with the following action on a given function $h(x)$, where $\mathbf{E}$ denotes a conditional expectation:

$$
\begin{aligned}
\boxed{
\hat{A}\{h(X_0)\}
\equiv \lim_{t \to 0^+} \bigg[ \frac{1}{t} \mathbf{E}\Big[ h(X_t) - h(X_0) \Big| X_0 \Big] \bigg]
}
\end{aligned}
$$

This definition only makes sense for those $h$ for which the limit exists. The assumption that $X_t$ has no explicit time dependence means that $X_0$ need not be the true initial condition: it can also be the state $X_s$ at any time $s$ infinitesimally smaller than $t$.
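The limit definition can be probed numerically. The following is a minimal Monte Carlo sketch (a hypothetical example, not from the text) for an Ornstein-Uhlenbeck process $\dd{X_t} = -\theta X_t \dd{t} + \sigma \dd{B_t}$ with $h(x) = x^2$, estimating $\hat{A}\{h(x)\}$ by averaging $(h(X_t) - h(x))/t$ over many one-step samples at a small $t$:

```python
import numpy as np

# Monte Carlo estimate of the generator A{h}(x) for an
# Ornstein-Uhlenbeck process dX = -theta*X dt + sigma dB,
# via (E[h(X_t) | X_0 = x] - h(x)) / t for a small t.
theta, sigma = 1.0, 1.0
x, t, n = 1.0, 1e-3, 4_000_000
h = lambda y: y**2

rng = np.random.default_rng(0)
# One Euler-Maruyama step from the fixed initial state x:
x_t = x - theta * x * t + sigma * np.sqrt(t) * rng.standard_normal(n)
est = (h(x_t).mean() - h(x)) / t

# From Ito's lemma, the exact value for this example is
# f h' + (g^2/2) h'' = -2*theta*x^2 + sigma^2 = -1 here.
print(est)  # close to -1
```

The estimator's bias is $O(t)$ and its noise shrinks like $1/\sqrt{n}$, so both $t$ and $1/n$ must be small for a good estimate.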

Conveniently, for a sufficiently well-behaved $h$, the generator $\hat{A}$ is identical to the Kolmogorov operator $\hat{L}$ found in the backward Kolmogorov equation:

$$
\begin{aligned}
\boxed{
\hat{A}\{h(x)\} = \hat{L}\{h(x)\}
}
\end{aligned}
$$
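For the one-dimensional diffusion considered here, this operator has the explicit form (which can be read off from the Itō expansion below):

$$
\begin{aligned}
\hat{L}\{h(x)\} = f(x) \pdv{h}{x} + \frac{1}{2} g^2(x) \pdvn{2}{h}{x}
\end{aligned}
$$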

To show this, we define a new process $Y_t \equiv h(X_t)$, and then apply Itō's lemma, leading to:

$$
\begin{aligned}
\dd{Y_t}
&= \bigg( \pdv{h}{x} f(X_t) + \frac{1}{2} \pdvn{2}{h}{x} g^2(X_t) \bigg) \dd{t} + \pdv{h}{x} g(X_t) \dd{B_t}
\\
&= \hat{L}\{h(X_t)\} \dd{t} + \pdv{h}{x} g(X_t) \dd{B_t}
\end{aligned}
$$

where we have recognized the definition of $\hat{L}$. Integrating the above equation yields:

$$
\begin{aligned}
Y_t = Y_0 + \int_0^t \hat{L}\{h(X_s)\} \dd{s} + \int_0^t \pdv{h}{x} g(X_s) \dd{B_s}
\end{aligned}
$$

As always, the latter Itō integral is a martingale, so it vanishes when we take the expectation conditioned on the "initial" state $X_0$, leaving:

$$
\begin{aligned}
\mathbf{E}[Y_t | X_0] = Y_0 + \mathbf{E}\bigg[ \int_0^t \hat{L}\{h(X_s)\} \dd{s} \bigg| X_0 \bigg]
\end{aligned}
$$

For sufficiently small $t$, the integral can be replaced by its first-order approximation:

$$
\begin{aligned}
\mathbf{E}[Y_t | X_0] \approx Y_0 + \hat{L}\{h(X_0)\} \: t
\end{aligned}
$$

Rearranging this gives the following, to be understood in the limit $t \to 0^+$:

$$
\begin{aligned}
\hat{L}\{h(X_0)\} \approx \frac{1}{t} \mathbf{E}[Y_t - Y_0 | X_0]
\end{aligned}
$$

In the limit $t \to 0^+$, the right-hand side is exactly the definition of $\hat{A}\{h(X_0)\}$, which proves the claimed equality.

The general definition of $\hat{A}$ resembles that of a classical derivative, and indeed, the generator $\hat{A}$ can be thought of as a differential operator. In that case, we would like an analogue of the classical fundamental theorem of calculus to relate it to integration.

Such an analogue is provided by Dynkin's formula: for a stopping time $\tau$ with a finite expected value $\mathbf{E}[\tau | X_0] < \infty$, it states that:

$$
\begin{aligned}
\boxed{
\mathbf{E}\big[ h(X_\tau) \big| X_0 \big]
= h(X_0) + \mathbf{E}\bigg[ \int_0^\tau \hat{L}\{h(X_t)\} \dd{t} \bigg| X_0 \bigg]
}
\end{aligned}
$$

The proof is similar to the one above. Define $Y_t = h(X_t)$ and use Itō's lemma:

$$
\begin{aligned}
\dd{Y_t}
&= \bigg( \pdv{h}{x} f(X_t) + \frac{1}{2} \pdvn{2}{h}{x} g^2(X_t) \bigg) \dd{t} + \pdv{h}{x} g(X_t) \dd{B_t}
\\
&= \hat{L}\{h(X_t)\} \dd{t} + \pdv{h}{x} g(X_t) \dd{B_t}
\end{aligned}
$$

And then integrate this from $t = 0$ to the provided stopping time $t = \tau$:

$$
\begin{aligned}
Y_\tau = Y_0 + \int_0^\tau \hat{L}\{h(X_t)\} \dd{t} + \int_0^\tau \pdv{h}{x} g(X_t) \dd{B_t}
\end{aligned}
$$

As before, the Itō integral is a martingale, so its expectation conditioned on the "initial" state $X_0$ is zero; since $\tau$ is a stopping time with $\mathbf{E}[\tau | X_0] < \infty$, this still holds when we evaluate at $t = \tau$. Taking the conditional expectation of the remaining terms leaves:

$$
\begin{aligned}
0 = \mathbf{E}\bigg[ Y_\tau - Y_0 - \int_0^\tau \hat{L}\{h(X_t)\} \dd{t} \bigg| X_0 \bigg]
\end{aligned}
$$

Isolating this equation for $\mathbf{E}[Y_\tau | X_0]$ then gives Dynkin's formula.
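Dynkin's formula is easy to check numerically. In the following sketch (a hypothetical example, not from the text), we take standard Brownian motion ($f = 0$, $g = 1$) with $h(x) = x^2$, so $\hat{L}\{h\} = 1$, and let $\tau$ be the exit time of $\Omega = (-1, 1)$ starting from $X_0 = 0$; the formula then predicts $\mathbf{E}[X_\tau^2 | X_0] = \mathbf{E}[\tau | X_0]$:

```python
import numpy as np

# Check Dynkin's formula for standard Brownian motion with
# h(x) = x^2 (so L{h} = 1) and the exit time tau of (-1, 1),
# starting from X_0 = 0. Dynkin predicts:
#   E[h(X_tau)] = h(0) + E[tau * 1] = E[tau].
rng = np.random.default_rng(1)
dt, n = 1e-3, 2000
x = np.zeros(n)            # current state of each path
tau = np.zeros(n)          # accumulated time of each path
alive = np.ones(n, dtype=bool)
while alive.any():
    # Euler-Maruyama step for the paths still inside (-1, 1):
    x[alive] += np.sqrt(dt) * rng.standard_normal(alive.sum())
    tau[alive] += dt
    alive &= np.abs(x) < 1.0

lhs = np.mean(x**2)        # estimate of E[h(X_tau)]
rhs = np.mean(tau)         # estimate of E[tau]
print(lhs, rhs)            # both close to 1
```

Both averages should agree up to Monte Carlo noise and the $O(\sqrt{\dd{t}})$ overshoot of the discretized exit.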

A common application of Dynkin's formula is predicting when the stopping time $\tau$ occurs, and in what state $X_\tau$ this happens. Consider an example: for a region $\Omega$ of state space with $X_0 \in \Omega$, we define the exit time $\tau \equiv \inf\{ t : X_t \notin \Omega \}$, provided that $\mathbf{E}[\tau | X_0] < \infty$.

To get information about when and where $X_t$ exits $\Omega$, we define the general reward $\Gamma$ as follows, consisting of a running reward $R$ for $X_t$ inside $\Omega$, and a terminal reward $T$ on the boundary $\partial \Omega$ where we stop at $X_\tau$:

$$
\begin{aligned}
\Gamma = \int_0^\tau R(X_t) \dd{t} + \: T(X_\tau)
\end{aligned}
$$

For example, for $R = 1$ and $T = 0$, this becomes $\Gamma = \tau$, and if $R = 0$, then $T(X_\tau)$ can tell us the exit point. Let us now define $h(X_0) = \mathbf{E}[\Gamma | X_0]$, and apply Dynkin's formula:

$$
\begin{aligned}
\mathbf{E}\big[ h(X_\tau) \big| X_0 \big]
&= \mathbf{E}\big[ \Gamma \big| X_0 \big] + \mathbf{E}\bigg[ \int_0^\tau \hat{L}\{h(X_t)\} \dd{t} \bigg| X_0 \bigg]
\\
&= \mathbf{E}\big[ T(X_\tau) \big| X_0 \big] + \mathbf{E}\bigg[ \int_0^\tau \Big( \hat{L}\{h(X_t)\} + R(X_t) \Big) \dd{t} \bigg| X_0 \bigg]
\end{aligned}
$$

The two leftmost terms depend on the exit point $X_\tau$, but not directly on $X_t$ for $t < \tau$, while the rightmost term depends on the whole trajectory of $X_t$. Therefore, the above formula is fulfilled if $h(x)$ satisfies the following equation and boundary conditions:

$$
\begin{aligned}
\boxed{
\begin{cases}
\hat{L}\{h(x)\} + R(x) = 0 & \mathrm{for}\; x \in \Omega
\\
h(x) = T(x) & \mathrm{for}\; x \notin \Omega
\end{cases}
}
\end{aligned}
$$

In other words, we have just turned a difficult question about a stochastic trajectory $X_t$ into a classical differential boundary value problem for $h(x)$.
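To make the boundary value problem concrete, here is a minimal finite-difference sketch (a hypothetical example, not from the text) for standard Brownian motion ($f = 0$, $g = 1$) on $\Omega = (-1, 1)$ with $R = 1$ and $T = 0$, so that $h(x)$ is the expected exit time. The problem becomes $\frac{1}{2} h''(x) + 1 = 0$ with $h(\pm 1) = 0$, whose exact solution is $h(x) = 1 - x^2$:

```python
import numpy as np

# Solve (1/2) h''(x) + 1 = 0 on (-1, 1) with h(-1) = h(1) = 0,
# i.e. the exit-time BVP with R = 1, T = 0 for Brownian motion.
# Exact solution: h(x) = 1 - x^2.
m = 199                          # number of interior grid points
xs = np.linspace(-1.0, 1.0, m + 2)
dx = xs[1] - xs[0]

# Central-difference discretization of (1/2) d^2/dx^2 with the
# zero Dirichlet boundary values folded into the interior system:
A = (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
     + np.diag(np.ones(m - 1), -1)) / (2.0 * dx**2)
h = np.linalg.solve(A, -np.ones(m))   # L{h} = -R at interior points

err = np.max(np.abs(h - (1.0 - xs[1:-1]**2)))
print(err)  # tiny: central differences are exact for quadratics
```

The same pattern, with $\hat{L}$'s drift term discretized as well, handles general one-dimensional diffusions.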


  1. U.H. Thygesen, Lecture notes on diffusions and stochastic differential equations, 2021, Polyteknisk Kompendie.