---
title: "Dynkin's formula"
firstLetter: "D"
publishDate: 2021-11-28
categories:
- Mathematics
- Stochastic analysis

date: 2021-11-26T10:10:09+01:00
draft: false
markup: pandoc
---

# Dynkin's formula

Given an [Itō diffusion](/know/concept/ito-calculus/) $X_t$
with a time-independent drift $f$ and intensity $g$,
such that the diffusion exists uniquely for all $t \ge 0$,
we define the **infinitesimal generator** $\hat{A}$
as an operator with the following action on a given function $h(x)$,
where $\mathbf{E}$ is a
[conditional expectation](/know/concept/conditional-expectation/):

$$\begin{aligned}
    \boxed{
        \hat{A}\{h(X_0)\}
        \equiv \lim_{t \to 0^+} \bigg[ \frac{1}{t} \mathbf{E}\Big[ h(X_t) - h(X_0) \Big| X_0 \Big] \bigg]
    }
\end{aligned}$$

This definition only makes sense for those $h$ for which the limit exists.
Because $X_t$ has no explicit time-dependence,
$X_0$ need not be the true initial condition;
it can also be the state $X_s$ at any time $s$ infinitesimally smaller than $t$.

Conveniently, for a sufficiently well-behaved $h$,
the generator $\hat{A}$ is identical to the Kolmogorov operator $\hat{L}$
found in the [backward Kolmogorov equation](/know/concept/kolmogorov-equations/):

$$\begin{aligned}
    \boxed{
        \hat{A}\{h(x)\}
        = \hat{L}\{h(x)\}
    }
\end{aligned}$$
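As a quick numerical sanity check of this identity, we can estimate the limit in the definition of $\hat{A}$ by simulation and compare it to $\hat{L}\{h\} = f \: \partial_x h + \frac{1}{2} g^2 \: \partial_x^2 h$. The sketch below assumes NumPy; the drift, intensity, and test function are arbitrary illustrative choices:

```python
import numpy as np

f = lambda x: -x       # drift (illustrative choice)
g = lambda x: 1.0      # intensity (illustrative choice)
h = lambda x: x**2     # test function

def generator_estimate(x0, t=1e-2, n=1_000_000, seed=0):
    """Estimate (1/t) E[h(X_t) - h(x0) | X_0 = x0] using one Euler-Maruyama step."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    x_t = x0 + f(x0) * t + g(x0) * np.sqrt(t) * z
    return np.mean(h(x_t) - h(x0)) / t

x0 = 1.0
# Kolmogorov operator: L{h}(x) = h'(x) f(x) + h''(x) g(x)^2 / 2 = -2 x^2 + 1
exact = 2 * x0 * f(x0) + 0.5 * 2 * g(x0)**2
print(generator_estimate(x0), exact)  # both close to -1
```

The one-step estimate carries an $O(t)$ bias as well as Monte Carlo noise, so it only approaches $\hat{L}\{h\}(x_0) = -1$ as $t \to 0^+$ and $n \to \infty$.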

<div class="accordion">
<input type="checkbox" id="proof-kolmogorov"/>
<label for="proof-kolmogorov">Proof</label>
<div class="hidden">
<label for="proof-kolmogorov">Proof.</label>
We define a new process $Y_t \equiv h(X_t)$, and then apply Itō's lemma, leading to:

$$\begin{aligned}
    \dd{Y_t}
    &= \bigg( \pdv{h}{x} f(X_t) + \frac{1}{2} \pdv[2]{h}{x} g^2(X_t) \bigg) \dd{t} + \pdv{h}{x} g(X_t) \dd{B_t}
    \\
    &= \hat{L}\{h(X_t)\} \dd{t} + \pdv{h}{x} g(X_t) \dd{B_t}
\end{aligned}$$

Where we have recognized the definition of $\hat{L}$.
Integrating the above equation yields:

$$\begin{aligned}
    Y_t
    = Y_0 + \int_0^t \hat{L}\{h(X_s)\} \dd{s} + \int_0^t \pdv{h}{x} g(X_s) \dd{B_s}
\end{aligned}$$

As always, the latter [Itō integral](/know/concept/ito-integral/)
is a [martingale](/know/concept/martingale/), so its expectation vanishes
when we condition on the "initial" state $X_0$, leaving:

$$\begin{aligned}
    \mathbf{E}[Y_t | X_0]
    = Y_0 + \mathbf{E}\bigg[ \int_0^t \hat{L}\{h(X_s)\} \dd{s} \bigg| X_0 \bigg]
\end{aligned}$$

For sufficiently small $t$, the integral can be replaced by its first-order approximation:

$$\begin{aligned}
    \mathbf{E}[Y_t | X_0]
    \approx Y_0 + \hat{L}\{h(X_0)\} \: t
\end{aligned}$$

Rearranging this yields:

$$\begin{aligned}
    \hat{L}\{h(X_0)\}
    \approx \frac{1}{t} \mathbf{E}[Y_t - Y_0| X_0]
\end{aligned}$$

In the limit $t \to 0^+$, the right-hand side is exactly
the definition of $\hat{A}\{h(X_0)\}$, completing the proof.
</div>
</div>

The definition of $\hat{A}$ resembles that of a classical derivative,
and indeed, the generator $\hat{A}$ can be thought of as a differential operator.
In that case, we would like an analogue of the classical
fundamental theorem of calculus to relate it to integration.

Such an analogue is provided by **Dynkin's formula**:
for a stopping time $\tau$ with a finite expected value $\mathbf{E}[\tau|X_0] < \infty$,
it states that:

$$\begin{aligned}
    \boxed{
        \mathbf{E}\big[ h(X_\tau) | X_0 \big]
        = h(X_0) + \mathbf{E}\bigg[ \int_0^\tau \hat{L}\{h(X_t)\} \dd{t} \bigg| X_0 \bigg]
    }
\end{aligned}$$

<div class="accordion">
<input type="checkbox" id="proof-dynkin"/>
<label for="proof-dynkin">Proof</label>
<div class="hidden">
<label for="proof-dynkin">Proof.</label>
The proof is similar to the one above.
Define $Y_t = h(X_t)$ and use Itō’s lemma:

$$\begin{aligned}
    \dd{Y_t}
    &= \bigg( \pdv{h}{x} f(X_t) + \frac{1}{2} \pdv[2]{h}{x} g^2(X_t) \bigg) \dd{t} + \pdv{h}{x} g(X_t) \dd{B_t}
    \\
    &= \hat{L} \{h(X_t)\} \dd{t} + \pdv{h}{x} g(X_t) \dd{B_t}
\end{aligned}$$

And then integrate this from $t = 0$ to the provided stopping time $t = \tau$:

$$\begin{aligned}
    Y_\tau
    = Y_0 + \int_0^\tau \hat{L}\{h(X_t)\} \dd{t} + \int_0^\tau \pdv{h}{x} g(X_t) \dd{B_t}
\end{aligned}$$

All [Itō integrals](/know/concept/ito-integral/)
are [martingales](/know/concept/martingale/),
so, by the optional stopping theorem
(which applies because $\mathbf{E}[\tau | X_0]$ is finite),
the latter integral's expectation conditioned on the "initial" state $X_0$ is zero.
Taking this conditional expectation of the whole equality thus gives:

$$\begin{aligned}
    0
    = \mathbf{E}\bigg[ Y_\tau - Y_0 - \int_0^\tau \hat{L}\{h(X_t)\} \dd{t} \bigg| X_0 \bigg]
\end{aligned}$$

Solving this equation for $\mathbf{E}[Y_\tau | X_0]$ then gives Dynkin's formula.
</div>
</div>
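To see the formula in action, we can check both sides numerically in a simple case. The sketch below (assuming NumPy) uses the illustrative diffusion $\dd{X_t} = - X_t \dd{t} + \dd{B_t}$, the test function $h(x) = x^2$, and the deterministic stopping time $\tau = 1$, estimating both sides of Dynkin's formula from the same Euler-Maruyama paths:

```python
import numpy as np

# Illustrative check of Dynkin's formula for dX = -X dt + dB
# with h(x) = x^2, for which L{h}(x) = 2x * (-x) + 1 = 1 - 2x^2,
# and the deterministic stopping time tau = 1.
rng = np.random.default_rng(1)
x0, tau, dt, n = 1.0, 1.0, 1e-3, 100_000
x = np.full(n, x0)
integral = np.zeros(n)                   # accumulates int_0^tau L{h}(X_t) dt
for _ in range(int(tau / dt)):
    integral += (1.0 - 2.0 * x**2) * dt  # left Riemann sum (Ito convention)
    x += -x * dt + np.sqrt(dt) * rng.standard_normal(n)
lhs = np.mean(x**2)                      # E[ h(X_tau) | X_0 ]
rhs = x0**2 + np.mean(integral)          # h(X_0) + E[ int_0^tau L{h} dt | X_0 ]
print(lhs, rhs)  # both near 0.568
```

For this Ornstein-Uhlenbeck-type process, $\mathbf{E}[X_t^2] = x_0^2 e^{-2t} + (1 - e^{-2t})/2$, so both sides should be close to $e^{-2} + (1 - e^{-2})/2 \approx 0.568$, up to discretization and sampling error.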

A common application of Dynkin's formula is predicting
when the stopping time $\tau$ occurs, and in what state $X_\tau$ this happens.
Consider an example:
for a region $\Omega$ of state space with $X_0 \in \Omega$,
we define the exit time $\tau \equiv \inf\{ t : X_t \notin \Omega \}$,
provided that $\mathbf{E}[\tau | X_0] < \infty$.

To get information about when and where $X_t$ exits $\Omega$,
we define the *general reward* $\Gamma$ as follows,
consisting of a *running reward* $R$ for $X_t$ inside $\Omega$,
and a *terminal reward* $T$ on the boundary $\partial \Omega$ where we stop at $X_\tau$:

$$\begin{aligned}
    \Gamma
    = \int_0^\tau R(X_t) \dd{t} + \: T(X_\tau)
\end{aligned}$$

For example, for $R = 1$ and $T = 0$, this becomes $\Gamma = \tau$,
and if $R = 0$, then $T(X_\tau)$ can tell us the exit point.
Let us now define $h(X_0) = \mathbf{E}[\Gamma | X_0]$,
and apply Dynkin's formula:

$$\begin{aligned}
    \mathbf{E}\big[ h(X_\tau) | X_0 \big]
    &= \mathbf{E}\big[ \Gamma \big| X_0 \big] + \mathbf{E}\bigg[ \int_0^\tau \hat{L}\{h(X_t)\} \dd{t} \bigg| X_0 \bigg]
    \\
    &= \mathbf{E}\big[ T(X_\tau) | X_0 \big] + \mathbf{E}\bigg[ \int_0^\tau \Big( \hat{L}\{h(X_t)\} + R(X_t) \Big) \dd{t} \bigg| X_0 \bigg]
\end{aligned}$$

The two leftmost terms depend on the exit point $X_\tau$,
but not directly on $X_t$ for $t < \tau$,
while the rightmost term depends on the whole trajectory $X_t$.
Therefore, the above formula is fulfilled
if $h(x)$ satisfies the following equation and boundary conditions:

$$\begin{aligned}
    \boxed{
        \begin{cases}
            \hat{L}\{h(x)\} + R(x) = 0 & \mathrm{for}\; x \in \Omega \\
            h(x) = T(x) & \mathrm{for}\; x \notin \Omega
        \end{cases}
    }
\end{aligned}$$

In other words, we have just turned a difficult question about a stochastic trajectory $X_t$
into a classical differential boundary value problem for $h(x)$.
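As a concrete illustration, take $X_t$ to be a Brownian motion ($f = 0$, $g = 1$) on $\Omega = (0, 1)$. For $R = 1$ and $T = 0$, solving $\frac{1}{2} h'' + 1 = 0$ with $h(0) = h(1) = 0$ gives the mean exit time $h(x) = x (1 - x)$; for $R = 0$ with $T = 1$ on the right boundary only, $h(x) = x$ is the probability of exiting at $x = 1$. The Monte Carlo sketch below (assuming NumPy) checks both predictions:

```python
import numpy as np

def exit_stats(x0, dt=1e-4, n_paths=2000, seed=0):
    """Euler-Maruyama paths of Brownian motion started at x0 in (0, 1).
    Returns (mean exit time, fraction of paths exiting through x = 1)."""
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, x0)
    tau = np.zeros(n_paths)
    alive = np.ones(n_paths, dtype=bool)
    while alive.any():
        x[alive] += np.sqrt(dt) * rng.standard_normal(alive.sum())
        tau[alive] += dt
        alive &= (x > 0.0) & (x < 1.0)
    return tau.mean(), (x >= 1.0).mean()

x0 = 0.3
mean_tau, p_right = exit_stats(x0)
print(mean_tau, x0 * (1 - x0))  # mean exit time vs. h(x) = x(1 - x) = 0.21
print(p_right, x0)              # right-exit probability vs. h(x) = x = 0.3
```

Both estimates carry a small $O(\sqrt{\dd{t}})$ boundary-overshoot bias on top of the sampling noise, but they should agree with the boundary value problem's predictions to a few percent.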



## References
1.  U.H. Thygesen,
    *Lecture notes on diffusions and stochastic differential equations*,
    2021, Polyteknisk Kompendie.