-rw-r--r--  content/know/concept/conditional-expectation/index.pdc  |   8
-rw-r--r--  content/know/concept/holomorphic-function/index.pdc     |  57
-rw-r--r--  content/know/concept/ito-calculus/index.pdc             |   5
-rw-r--r--  content/know/concept/kolmogorov-equations/index.pdc     | 209
-rw-r--r--  content/know/concept/markov-process/index.pdc           |  66
-rw-r--r--  content/know/concept/martingale/index.pdc               |   8
-rw-r--r--  content/know/concept/matsubara-sum/index.pdc            | 148
-rw-r--r--  content/know/concept/residue-theorem/index.pdc          |  77
-rw-r--r--  content/know/concept/wiener-process/index.pdc           |   6
9 files changed, 522 insertions, 62 deletions
diff --git a/content/know/concept/conditional-expectation/index.pdc b/content/know/concept/conditional-expectation/index.pdc
index 5bcc152..5a8f07e 100644
--- a/content/know/concept/conditional-expectation/index.pdc
+++ b/content/know/concept/conditional-expectation/index.pdc
@@ -77,10 +77,10 @@ $$\begin{aligned}
Recall that because $Y$ is a random variable,
$\mathbf{E}[X|Y] = f(Y)$ is too.
In other words, $f$ maps $Y$ to another random variable,
-which, due to the *Doob-Dynkin lemma*
-(see [$\sigma$-algebra](/know/concept/sigma-algebra/)),
-must mean that $\mathbf{E}[X|Y]$ is measurable with respect to $\sigma(Y)$.
-Intuitively, this makes some sense:
+which, thanks to the *Doob-Dynkin lemma*
+(see [random variable](/know/concept/random-variable/)),
+means that $\mathbf{E}[X|Y]$ is measurable with respect to $\sigma(Y)$.
+Intuitively, this makes sense:
$\mathbf{E}[X|Y]$ cannot contain more information about events
than the $Y$ it was calculated from.
diff --git a/content/know/concept/holomorphic-function/index.pdc b/content/know/concept/holomorphic-function/index.pdc
index 4b7221c..3e3984a 100644
--- a/content/know/concept/holomorphic-function/index.pdc
+++ b/content/know/concept/holomorphic-function/index.pdc
@@ -193,60 +193,3 @@ this proof works inductively for all higher orders $n$.
</div>
</div>
-
-## Residue theorem
-
-A function $f(z)$ is **meromorphic** if it is holomorphic except in
-a finite number of **simple poles**, which are points $z_p$ where
-$f(z_p)$ diverges, but where the product $(z - z_p) f(z)$ is non-zero and
-still holomorphic close to $z_p$.
-
-The **residue** $R_p$ of a simple pole $z_p$ is defined as follows, and
-represents the rate at which $f(z)$ diverges close to $z_p$:
-
-$$\begin{aligned}
- \boxed{
- R_p = \lim_{z \to z_p} (z - z_p) f(z)
- }
-\end{aligned}$$
-
-**Cauchy's residue theorem** generalizes Cauchy's integral theorem
-to meromorphic functions, and states that the integral of a contour $C$
-depends on the simple poles $p$ it encloses:
-
-$$\begin{aligned}
- \boxed{
- \oint_C f(z) \dd{z} = i 2 \pi \sum_{p} R_p
- }
-\end{aligned}$$
-
-<div class="accordion">
-<input type="checkbox" id="proof-res-theorem"/>
-<label for="proof-res-theorem">Proof</label>
-<div class="hidden">
-<label for="proof-res-theorem">Proof.</label>
-From the definition of a meromorphic function,
-we know that we can decompose $f(z)$ like so,
-where $h(z)$ is holomorphic and $p$ are all its poles:
-
-$$\begin{aligned}
- f(z) = h(z) + \sum_{p} \frac{R_p}{z - z_p}
-\end{aligned}$$
-
-We integrate this over a contour $C$ which contains all poles, and apply
-both Cauchy's integral theorem and Cauchy's integral formula to get:
-
-$$\begin{aligned}
- \oint_C f(z) \dd{z}
- &= \oint_C h(z) \dd{z} + \sum_{p} R_p \oint_C \frac{1}{z - z_p} \dd{z}
- = \sum_{p} R_p \: 2 \pi i
-\end{aligned}$$
-</div>
-</div>
-
-This theorem might not seem very useful,
-but in fact, thanks to some clever mathematical magic,
-it allows us to evaluate many integrals along the real axis,
-most notably [Fourier transforms](/know/concept/fourier-transform/).
-It can also be used to derive the [Kramers-Kronig relations](/know/concept/kramers-kronig-relations).
-
diff --git a/content/know/concept/ito-calculus/index.pdc b/content/know/concept/ito-calculus/index.pdc
index 3527b1d..7a80e2f 100644
--- a/content/know/concept/ito-calculus/index.pdc
+++ b/content/know/concept/ito-calculus/index.pdc
@@ -60,6 +60,9 @@ $$\begin{aligned}
An Itō process $X_t$ is said to satisfy this equation
if $f(X_t, t) = F_t$ and $g(X_t, t) = G_t$,
in which case $X_t$ is also called an **Itō diffusion**.
+All Itō diffusions are [Markov processes](/know/concept/markov-process/),
+since only the current value of $X_t$ determines the future,
+and $B_t$ is also a Markov process.
## Itō's lemma
@@ -80,7 +83,7 @@ known as **Itō's lemma**:
$$\begin{aligned}
\boxed{
\dd{Y_t}
- = \pdv{h}{t} \dd{t} + \bigg( \pdv{h}{x} F_t + \frac{1}{2} G_t^2 \pdv[2]{h}{x} \bigg) \dd{t} + \pdv{h}{x} G_t \dd{B_t}
+ = \bigg( \pdv{h}{t} + \pdv{h}{x} F_t + \frac{1}{2} \pdv[2]{h}{x} G_t^2 \bigg) \dd{t} + \pdv{h}{x} G_t \dd{B_t}
}
\end{aligned}$$
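+
+For example, for $Y_t = B_t^2$
+(i.e. $h(x, t) = x^2$ with $F_t = 0$ and $G_t = 1$),
+Itō's lemma yields the well-known result:
+
+$$\begin{aligned}
+ \dd{(B_t^2)}
+ = \Big( 0 + 0 + \frac{1}{2} \cdot 2 \Big) \dd{t} + 2 B_t \dd{B_t}
+ = \dd{t} + 2 B_t \dd{B_t}
+\end{aligned}$$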
diff --git a/content/know/concept/kolmogorov-equations/index.pdc b/content/know/concept/kolmogorov-equations/index.pdc
new file mode 100644
index 0000000..331d803
--- /dev/null
+++ b/content/know/concept/kolmogorov-equations/index.pdc
@@ -0,0 +1,209 @@
+---
+title: "Kolmogorov equations"
+firstLetter: "K"
+publishDate: 2021-11-14
+categories:
+- Mathematics
+- Statistics
+
+date: 2021-11-13T21:05:30+01:00
+draft: false
+markup: pandoc
+---
+
+# Kolmogorov equations
+
+Consider the following general [Itō diffusion](/know/concept/ito-calculus/)
+$X_t \in \mathbb{R}$, which is assumed to satisfy
+the conditions for unique existence on the entire time axis:
+
+$$\begin{aligned}
+ \dd{X_t}
+ = f(X_t, t) \dd{t} + g(X_t, t) \dd{B_t}
+\end{aligned}$$
+
+Let $\mathcal{F}_t$ be the filtration to which $X_t$ is adapted.
+We then define $Y_s$ as shown below,
+namely as the [conditional expectation](/know/concept/conditional-expectation/)
+of $h(X_t)$, for an arbitrary bounded function $h(x)$,
+given the information $\mathcal{F}_s$ available at time $s \le t$.
+Because $X_t$ is a [Markov process](/know/concept/markov-process/),
+$Y_s$ must be $X_s$-measurable,
+so it is a function $k$ of $X_s$ and $s$:
+
+$$\begin{aligned}
+ Y_s
+ \equiv \mathbf{E}[h(X_t) | \mathcal{F}_s]
+ = \mathbf{E}[h(X_t) | X_s]
+ = k(X_s, s)
+\end{aligned}$$
+
+Consequently, we can apply Itō's lemma to find $\dd{Y_s}$
+in terms of $k$, $f$ and $g$:
+
+$$\begin{aligned}
+ \dd{Y_s}
+ &= \bigg( \pdv{k}{s} + \pdv{k}{x} f + \frac{1}{2} \pdv[2]{k}{x} g^2 \bigg) \dd{s} + \pdv{k}{x} g \dd{B_s}
+ \\
+ &= \bigg( \pdv{k}{s} + \hat{L} k \bigg) \dd{s} + \pdv{k}{x} g \dd{B_s}
+\end{aligned}$$
+
+Where we have defined the linear operator $\hat{L}$
+to have the following action on $k$:
+
+$$\begin{aligned}
+ \hat{L} k
+ \equiv \pdv{k}{x} f + \frac{1}{2} \pdv[2]{k}{x} g^2
+\end{aligned}$$
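+
+For instance, for the Wiener process itself ($f = 0$, $g = 1$),
+$\hat{L}$ reduces to the generator of Brownian motion:
+
+$$\begin{aligned}
+ \hat{L} k
+ = \frac{1}{2} \pdv[2]{k}{x}
+\end{aligned}$$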
+
+At this point, we need to realize that $Y_s$ is
+a [martingale](/know/concept/martingale/) with respect to $\mathcal{F}_s$,
+since $Y_s$ is $\mathcal{F}_s$-adapted and bounded (because $h$ is),
+and it satisfies the martingale property
+for $r \le s \le t$:
+
+$$\begin{aligned}
+ \mathbf{E}[Y_s | \mathcal{F}_r]
+ = \mathbf{E}\Big[ \mathbf{E}[h(X_t) | \mathcal{F}_s] \Big| \mathcal{F}_r \Big]
+ = \mathbf{E}\big[ h(X_t) \big| \mathcal{F}_r \big]
+ = Y_r
+\end{aligned}$$
+
+Where we used the tower property of conditional expectations,
+because $\mathcal{F}_r \subset \mathcal{F}_s$.
+However, an Itō diffusion can only be a martingale
+if its drift term (the one containing $\dd{s}$) vanishes,
+so, looking at $\dd{Y_s}$, we must demand that:
+
+$$\begin{aligned}
+ \pdv{k}{s} + \hat{L} k
+ = 0
+\end{aligned}$$
+
+Because $X_t$ is a Markov process,
+we can write $k$ in terms of a transition density $p(s, X_s; t, X_t)$,
+where in this case $s$ and $X_s$ are given initial conditions,
+$t$ is a parameter, and the terminal state $X_t$ is a random variable.
+We thus have:
+
+$$\begin{aligned}
+ k(x, s)
+ = \int_{-\infty}^\infty p(s, x; t, y) \: h(y) \dd{y}
+\end{aligned}$$
+
+We insert this into the equation that we just derived for $k$, yielding:
+
+$$\begin{aligned}
+ 0
+ = \int_{-\infty}^\infty \!\! \Big( \pdv{s} p(s, x; t, y) + \hat{L} p(s, x; t, y) \Big) h(y) \dd{y}
+\end{aligned}$$
+
+Because $h$ is arbitrary, and this must be satisfied for all $h$,
+the transition density $p$ fulfills:
+
+$$\begin{aligned}
+ 0
+ = \pdv{s} p(s, x; t, y) + \hat{L} p(s, x; t, y)
+\end{aligned}$$
+
+Here, $t$ is a known parameter and $y$ is a "known" integration variable,
+leaving only $s$ and $x$ as free variables for us to choose.
+We therefore define the **likelihood function** $\psi(s, x)$,
+which gives the likelihood of an initial condition $(s, x)$
+given that the terminal condition is $(t, y)$:
+
+$$\begin{aligned}
+ \boxed{
+ \psi(s, x)
+ \equiv p(s, x; t, y)
+ }
+\end{aligned}$$
+
+And from the above derivation,
+we conclude that $\psi$ satisfies the following PDE,
+known as the **backward Kolmogorov equation**:
+
+$$\begin{aligned}
+ \boxed{
+ - \pdv{\psi}{s}
+ = \hat{L} \psi
+ = f \pdv{\psi}{x} + \frac{1}{2} g^2 \pdv[2]{\psi}{x}
+ }
+\end{aligned}$$
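+
+For example, for the Wiener process ($f = 0$, $g = 1$),
+this is the backward heat equation,
+which is satisfied by a Gaussian in the remaining time $t - s$:
+
+$$\begin{aligned}
+ - \pdv{\psi}{s}
+ = \frac{1}{2} \pdv[2]{\psi}{x}
+ \qquad \quad
+ \psi(s, x)
+ = \frac{1}{\sqrt{2 \pi (t - s)}} \exp\!\Big(\!-\!\frac{(y - x)^2}{2 (t - s)} \Big)
+\end{aligned}$$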
+
+Moving on, we can define the traditional
+**probability density function** $\phi(t, y)$ from the transition density $p$,
+by fixing the initial $(s, x)$
+and leaving the terminal $(t, y)$ free:
+
+$$\begin{aligned}
+ \boxed{
+ \phi(t, y)
+ \equiv p(s, x; t, y)
+ }
+\end{aligned}$$
+
+With this in mind, for $(s, x) = (0, X_0)$,
+the unconditional expectation $\mathbf{E}[Y_t]$
+(i.e. the conditional expectation without information)
+will be constant in time, because $Y_t$ is a martingale:
+
+$$\begin{aligned}
+ \mathbf{E}[Y_t]
+ = \mathbf{E}[k(X_t, t)]
+ = \int_{-\infty}^\infty k(y, t) \: \phi(t, y) \dd{y}
+ = \braket{k}{\phi}
+ = \mathrm{const}
+\end{aligned}$$
+
+This integral has the form of an inner product,
+so we switch to [Dirac notation](/know/concept/dirac-notation/).
+We differentiate with respect to $t$,
+and use the backward equation $\pdv*{k}{t} + \hat{L} k = 0$:
+
+$$\begin{aligned}
+ 0
+ = \pdv{t} \braket{k}{\phi}
+ = \braket{k}{\pdv{\phi}{t}} + \braket{\pdv{k}{t}}{\phi}
+ = \braket{k}{\pdv{\phi}{t}} - \braket{\hat{L} k}{\phi}
+ = \braket{k}{\pdv{\phi}{t} - \hat{L}{}^\dagger \phi}
+\end{aligned}$$
+
+Where $\hat{L}{}^\dagger$ is by definition the adjoint operator of $\hat{L}$,
+which we calculate using partial integration,
+where all boundary terms vanish thanks to the *existence* of $X_t$;
+in other words, $X_t$ cannot reach infinity at any finite $t$,
+so the integrand must decay to zero for $|y| \to \infty$:
+
+$$\begin{aligned}
+ \braket{\hat{L} k}{\phi}
+ &= \int_{-\infty}^\infty \pdv{k}{y} f \phi + \frac{1}{2} \pdv[2]{k}{y} g^2 \phi \dd{y}
+ \\
+ &= \bigg[ k f \phi + \frac{1}{2} \pdv{k}{y} g^2 \phi \bigg]_{-\infty}^\infty
+ - \int_{-\infty}^\infty k \pdv{y}(f \phi) + \frac{1}{2} \pdv{k}{y} \pdv{y}(g^2 \phi) \dd{y}
+ \\
+ &= \bigg[ k f \phi + \frac{1}{2} \pdv{k}{y} g^2 \phi - \frac{1}{2} k \pdv{y}(g^2 \phi) \bigg]_{-\infty}^\infty
+ + \int_{-\infty}^\infty - k \pdv{y}(f \phi) + \frac{1}{2} k \pdv[2]{y}(g^2 \phi) \dd{y}
+ \\
+ &= \int_{-\infty}^\infty k \: \big( \hat{L}{}^\dagger \phi \big) \dd{y}
+ = \braket{k}{\hat{L}{}^\dagger \phi}
+\end{aligned}$$
+
+Since $k$ is arbitrary, and $\pdv*{\braket{k}{\phi}}{t} = 0$ for all $k$,
+we thus arrive at the **forward Kolmogorov equation**,
+describing the evolution of the probability density $\phi(t, y)$:
+
+$$\begin{aligned}
+ \boxed{
+ \pdv{\phi}{t}
+ = \hat{L}{}^\dagger \phi
+ = - \pdv{y}(f \phi) + \frac{1}{2} \pdv[2]{y}(g^2 \phi)
+ }
+\end{aligned}$$
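+
+As an example, take an Ornstein-Uhlenbeck process,
+i.e. $f = - \lambda x$ and $g = \sigma$ for constants $\lambda, \sigma > 0$:
+
+$$\begin{aligned}
+ \pdv{\phi}{t}
+ = \lambda \pdv{y}(y \phi) + \frac{\sigma^2}{2} \pdv[2]{\phi}{y}
+\end{aligned}$$
+
+Demanding $\pdv*{\phi}{t} = 0$ then gives the stationary density,
+a Gaussian with variance $\sigma^2 / (2 \lambda)$.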
+
+
+
+## References
+1. U.H. Thygesen,
+ *Lecture notes on diffusions and stochastic differential equations*,
+ 2021, Polyteknisk Kompendie.
diff --git a/content/know/concept/markov-process/index.pdc b/content/know/concept/markov-process/index.pdc
new file mode 100644
index 0000000..536aa00
--- /dev/null
+++ b/content/know/concept/markov-process/index.pdc
@@ -0,0 +1,66 @@
+---
+title: "Markov process"
+firstLetter: "M"
+publishDate: 2021-11-14
+categories:
+- Mathematics
+
+date: 2021-11-13T21:05:21+01:00
+draft: false
+markup: pandoc
+---
+
+# Markov process
+
+Given a [stochastic process](/know/concept/stochastic-process/)
+$\{X_t : t \ge 0\}$ on a filtered probability space
+$(\Omega, \mathcal{F}, \{\mathcal{F}_t\}, P)$,
+it is said to be a **Markov process**
+if it satisfies the following requirements:
+
+1. $X_t$ is $\mathcal{F}_t$-adapted,
+ meaning that the current and all past values of $X_t$
+ can be reconstructed from the filtration $\mathcal{F}_t$.
+2. For every bounded *Borel-measurable* function $h(x)$,
+   the [conditional expectation](/know/concept/conditional-expectation/)
+   $\mathbf{E}[h(X_t) | \mathcal{F}_s] = \mathbf{E}[h(X_t) | X_s]$,
+   i.e. at time $s \le t$, the expectation of $h(X_t)$ depends only on the current $X_s$.
+   Borel-measurability ensures that $h(X_t)$ is a valid random variable,
+   with $\sigma(h(X_t)) \subseteq \sigma(X_t) \subseteq \mathcal{F}_t$.
+
+This last condition is called the **Markov property**,
+and demands that the future of $X_t$ does not depend on the past,
+but only on the present $X_s$.
+
+If both $t$ and $X_t$ are taken to be discrete,
+then $X_t$ is known as a **Markov chain**.
+This brings us to the concept of the **transition probability**
+$P(X_t \in A | X_s = x)$, which describes the probability that
+$X_t$ will be in a given set $A$, if we know that currently $X_s = x$.
+
+If $t$ and $X_t$ are continuous, we can often (but not always) express $P$
+using a **transition density** $p(s, x; t, y)$,
+which gives the probability density that the initial condition $X_s = x$
+will evolve into the terminal condition $X_t = y$.
+Then the transition probability $P$ can be calculated like so,
+where $B$ is a given Borel set (see [$\sigma$-algebra](/know/concept/sigma-algebra/)):
+
+$$\begin{aligned}
+ P(X_t \in B | X_s = x)
+ = \int_B p(s, x; t, y) \dd{y}
+\end{aligned}$$
+
+A prime example of a continuous Markov process is
+the [Wiener process](/know/concept/wiener-process/).
+Note that this is also a [martingale](/know/concept/martingale/):
+often, a Markov process happens to be a martingale, or vice versa.
+However, those concepts are not to be confused:
+the Markov property does not specify *what* the expected future must be,
+and the martingale property says nothing about the history-dependence.
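+
+Concretely, the Wiener process' transition density
+is a Gaussian with variance $t - s$,
+centered on the initial condition $x$:
+
+$$\begin{aligned}
+ p(s, x; t, y)
+ = \frac{1}{\sqrt{2 \pi (t - s)}} \exp\!\Big(\!-\!\frac{(y - x)^2}{2 (t - s)} \Big)
+\end{aligned}$$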
+
+
+
+## References
+1. U.H. Thygesen,
+ *Lecture notes on diffusions and stochastic differential equations*,
+ 2021, Polyteknisk Kompendie.
diff --git a/content/know/concept/martingale/index.pdc b/content/know/concept/martingale/index.pdc
index 21fa918..41c2709 100644
--- a/content/know/concept/martingale/index.pdc
+++ b/content/know/concept/martingale/index.pdc
@@ -37,6 +37,14 @@ Accordingly, the [Wiener process](/know/concept/wiener-process/) $B_t$
(Brownian motion) is an example of a martingale,
since each of its increments $B_t \!-\! B_s$ has mean $0$ by definition.
+Martingales are easily confused with
+[Markov processes](/know/concept/markov-process/),
+because stochastic processes will often be both,
+e.g. the Wiener process.
+However, these are distinct concepts:
+the martingale property says nothing about history-dependence,
+and the Markov property does not say *what* the future expectation should be.
+
Modifying property (3) leads to two common generalizations.
The stochastic process $M_t$ above is a **submartingale**
if the current value is a lower bound for the expectation:
diff --git a/content/know/concept/matsubara-sum/index.pdc b/content/know/concept/matsubara-sum/index.pdc
new file mode 100644
index 0000000..91183e6
--- /dev/null
+++ b/content/know/concept/matsubara-sum/index.pdc
@@ -0,0 +1,148 @@
+---
+title: "Matsubara sum"
+firstLetter: "M"
+publishDate: 2021-11-13
+categories:
+- Physics
+- Quantum mechanics
+
+date: 2021-11-05T15:19:38+01:00
+draft: false
+markup: pandoc
+---
+
+# Matsubara sum
+
+A **Matsubara sum** is a summation of the following form,
+which notably appears as the inverse
+[Fourier transform](/know/concept/fourier-transform/) of the
+[Matsubara Green's function](/know/concept/matsubara-greens-function/):
+
+$$\begin{aligned}
+ S_{B,F}
+ = \frac{1}{\hbar \beta} \sum_{i \omega_n} g(i \omega_n) \: e^{i \omega_n \tau}
+\end{aligned}$$
+
+Where $i \omega_n$ are the Matsubara frequencies
+for bosons ($B$) or fermions ($F$),
+and $g(z)$ is a function on the complex plane
+that is [holomorphic](/know/concept/holomorphic-function/)
+except for a known set of simple poles,
+and $\tau$ is a real parameter
+(e.g. the [imaginary time](/know/concept/imaginary-time/))
+satisfying $-\hbar \beta < \tau < \hbar \beta$.
+
+Now, consider the following integral
+over a (for now) unspecified counter-clockwise contour $C$,
+with a (for now) unspecified weighting function $h(z)$:
+
+$$\begin{aligned}
+ \oint_C \frac{g(z) h(z)}{2 \pi i} e^{z \tau} \dd{z}
+ = \sum_{z_p} e^{z_p \tau} \: \underset{z \to z_p}{\mathrm{Res}}\big( g(z) h(z) \big)
+\end{aligned}$$
+
+Where we have applied the residue theorem
+to get a sum over all simple poles $z_p$
+of either $g$ or $h$ (but not both) enclosed by $C$.
+Clearly, we could make this look like a Matsubara sum,
+if we choose $h$ such that it has poles at $i \omega_n$.
+
+Therefore, we choose the weighting function $h(z)$ as follows,
+where $n_B(z)$ is the [Bose-Einstein distribution](/know/concept/bose-einstein-distribution/),
+and $n_F(z)$ is the [Fermi-Dirac distribution](/know/concept/fermi-dirac-distribution/):
+
+$$\begin{aligned}
+ h(z)
+ =
+ \begin{cases}
+ n_{B,F}(z) & \mathrm{if}\; \tau \ge 0
+ \\
+ -n_{B,F}(-z) & \mathrm{if}\; \tau \le 0
+ \end{cases}
+ \qquad \qquad
+ n_{B,F}(z)
+ = \frac{1}{e^{\hbar \beta z} \mp 1}
+\end{aligned}$$
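+
+Recall that the Matsubara frequencies are defined
+such that $e^{i \hbar \beta \omega_n} = 1$ for bosons
+and $e^{i \hbar \beta \omega_n} = -1$ for fermions:
+
+$$\begin{aligned}
+ \omega_n =
+ \begin{cases}
+ \dfrac{2 n \pi}{\hbar \beta} & \mathrm{bosons}
+ \\
+ \dfrac{(2 n + 1) \pi}{\hbar \beta} & \mathrm{fermions}
+ \end{cases}
+ \qquad \quad
+ n \in \mathbb{Z}
+\end{aligned}$$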
+
+The distinction between the signs of $\tau$ is needed
+to ensure that the integrand $h(z) e^{z \tau}$ decays for $|z| \to \infty$,
+both for $\Re(z) > 0$ and $\Re(z) < 0$.
+This choice of $h$ indeed has poles at the respective
+Matsubara frequencies $i \omega_n$ of bosons and fermions,
+and the residues are:
+
+$$\begin{aligned}
+ \underset{z \to i \omega_n}{\mathrm{Res}}\!\big( n_B(z) \big)
+ &= \lim_{z \to i \omega_n}\!\bigg( \frac{z - i \omega_n}{e^{\hbar \beta z} - 1} \bigg)
+ = \lim_{\eta \to 0}\!\bigg( \frac{i \omega_n + \eta - i \omega_n}{e^{i \hbar \beta \omega_n} e^{\hbar \beta \eta} - 1} \bigg)
+ \\
+ &= \lim_{\eta \to 0}\!\bigg( \frac{\eta}{e^{\hbar \beta \eta} - 1} \bigg)
+ = \lim_{\eta \to 0}\!\bigg( \frac{\eta}{1 + \hbar \beta \eta - 1} \bigg)
+ = \frac{1}{\hbar \beta}
+ \\
+ \underset{z \to i \omega_n}{\mathrm{Res}}\!\big( n_F(z) \big)
+ &= \lim_{z \to i \omega_n}\!\bigg( \frac{z - i \omega_n}{e^{\hbar \beta z} + 1} \bigg)
+ = \lim_{\eta \to 0}\!\bigg( \frac{i \omega_n + \eta - i \omega_n}{e^{i \hbar \beta \omega_n} e^{\hbar \beta \eta} + 1} \bigg)
+ \\
+ &= \lim_{\eta \to 0}\!\bigg( \frac{\eta}{- e^{\hbar \beta \eta} + 1} \bigg)
+ = \lim_{\eta \to 0}\!\bigg( \frac{\eta}{- 1 - \hbar \beta \eta + 1} \bigg)
+ = - \frac{1}{\hbar \beta}
+\end{aligned}$$
+
+In the definition of $h$, the sign flip for $\tau \le 0$
+is introduced because negating the argument also negates the residues,
+i.e. $\mathrm{Res}\big( n_F(-z) \big) = -\mathrm{Res}\big( n_F(z) \big)$.
+With this $h$, our contour integral can be rewritten as follows:
+
+$$\begin{aligned}
+ \oint_C \frac{g(z) h(z)}{2 \pi i} e^{z \tau} \dd{z}
+ &= \sum_{z_p} e^{z_p \tau} n_{B,F}(z_p) \: \underset{z \to z_p}{\mathrm{Res}}\big( g(z) \big)
+ + \sum_{i \omega_n} e^{i \omega_n \tau} g(i \omega_n) \: \underset{z \to i \omega_n}{\mathrm{Res}}\!\big( n_{B,F}(z) \big)
+ \\
+ &= \sum_{z_p} e^{z_p \tau} n_{B,F}(z_p) \: \underset{z \to z_p}{\mathrm{Res}}\big( g(z) \big)
+ \pm \frac{1}{\hbar \beta} \sum_{i \omega_n} g(i \omega_n) \: e^{i \omega_n \tau}
+\end{aligned}$$
+
+Where $+$ is for bosons, and $-$ for fermions.
+Here, we recognize the last term as the Matsubara sum $S_{B,F}$,
+which we isolate, yielding:
+
+$$\begin{aligned}
+ S_{B,F}
+ = \mp \sum_{z_p} e^{z_p \tau} n_{B,F}(z_p) \: \underset{z \to z_p}{\mathrm{Res}}\big( g(z) \big)
+ \pm \oint_C \frac{g(z) h(z)}{2 \pi i} e^{z \tau} \dd{z}
+\end{aligned}$$
+
+Now we must choose $C$.
+We know that $h(z) e^{z \tau}$ decays to zero for $|z| \to \infty$,
+so, as long as $g(z)$ grows slower than that decay,
+a useful choice would be a circle of radius $R$.
+If we then let $R \to \infty$, the contour encloses
+the whole complex plane, including all of the integrand's poles.
+However, thanks to the integrand's decay,
+the resulting contour integral must vanish:
+
+$$\begin{aligned}
+ C
+ = \big\{ R e^{i \theta} : 0 \le \theta \le 2 \pi \big\}
+ \quad \implies \quad
+ \lim_{R \to \infty}
+ \oint_C g(z) \: h(z) \: e^{z \tau} \dd{z}
+ = 0
+\end{aligned}$$
+
+We thus arrive at the following results
+for bosonic and fermionic Matsubara sums $S_{B,F}$:
+
+$$\begin{aligned}
+ \boxed{
+ S_{B,F}
+ = \mp \sum_{z_p} e^{z_p \tau} n_{B,F}(z_p) \: \underset{z \to z_p}{\mathrm{Res}}\big(g(z)\big)
+ }
+\end{aligned}$$
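+
+As a sanity check, take the fermionic sum over $g(z) = 1 / (z - \xi)$
+for some constant $\xi$, which has a single simple pole $z_p = \xi$
+with residue $1$.
+Letting $\tau \to 0^+$, the result is simply
+the Fermi-Dirac occupation of an energy $\hbar \xi$:
+
+$$\begin{aligned}
+ S_F
+ = \lim_{\tau \to 0^+} e^{\xi \tau} \: n_F(\xi)
+ = \frac{1}{e^{\hbar \beta \xi} + 1}
+\end{aligned}$$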
+
+
+
+## References
+1. H. Bruus, K. Flensberg,
+ *Many-body quantum theory in condensed matter physics*,
+ 2016, Oxford.
diff --git a/content/know/concept/residue-theorem/index.pdc b/content/know/concept/residue-theorem/index.pdc
new file mode 100644
index 0000000..02a8ece
--- /dev/null
+++ b/content/know/concept/residue-theorem/index.pdc
@@ -0,0 +1,77 @@
+---
+title: "Residue theorem"
+firstLetter: "R"
+publishDate: 2021-11-13
+categories:
+- Mathematics
+- Complex analysis
+
+date: 2021-11-13T20:51:13+01:00
+draft: false
+markup: pandoc
+---
+
+# Residue theorem
+
+A function $f(z)$ is **meromorphic** if it is
+[holomorphic](/know/concept/holomorphic-function/)
+except in a finite number of **simple poles**,
+which are points $z_p$ where $f(z_p)$ diverges,
+but where the product $(z - z_p) f(z)$ is non-zero
+and still holomorphic close to $z_p$.
+In other words, close to $z_p$, $f(z)$ can be approximated by:
+
+$$\begin{aligned}
+ f(z)
+ \approx \frac{R_p}{z - z_p}
+\end{aligned}$$
+
+Where the **residue** $R_p$ of a simple pole $z_p$ is defined as follows, and
+represents the rate at which $f(z)$ diverges close to $z_p$:
+
+$$\begin{aligned}
+ \boxed{
+ R_p = \lim_{z \to z_p} (z - z_p) f(z)
+ }
+\end{aligned}$$
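+
+For example, $f(z) = e^z / (z - 1)$
+has a single simple pole $z_p = 1$, whose residue is:
+
+$$\begin{aligned}
+ R_p
+ = \lim_{z \to 1} \: (z - 1) \frac{e^z}{z - 1}
+ = e^1
+ = e
+\end{aligned}$$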
+
+**Cauchy's residue theorem** for meromorphic functions
+is a generalization of Cauchy's integral theorem for holomorphic functions,
+and states that the integral along a contour $C$
+depends only on the simple poles $z_p$ enclosed by $C$:
+
+$$\begin{aligned}
+ \boxed{
+ \oint_C f(z) \dd{z} = i 2 \pi \sum_{z_p} R_p
+ }
+\end{aligned}$$
+
+<div class="accordion">
+<input type="checkbox" id="proof-res-theorem"/>
+<label for="proof-res-theorem">Proof</label>
+<div class="hidden">
+<label for="proof-res-theorem">Proof.</label>
+From the definition of a meromorphic function,
+we know that we can decompose $f(z)$ like so,
+where $h(z)$ is holomorphic and $z_p$ are all its poles:
+
+$$\begin{aligned}
+ f(z) = h(z) + \sum_{z_p} \frac{R_p}{z - z_p}
+\end{aligned}$$
+
+We integrate this over a contour $C$ which contains all poles, and apply
+both Cauchy's integral theorem and Cauchy's integral formula to get:
+
+$$\begin{aligned}
+ \oint_C f(z) \dd{z}
+ &= \oint_C h(z) \dd{z} + \sum_{z_p} R_p \oint_C \frac{1}{z - z_p} \dd{z}
+ = \sum_{z_p} R_p \: 2 \pi i
+\end{aligned}$$
+</div>
+</div>
+
+This theorem might not seem very useful,
+but in fact, by cleverly choosing the contour $C$,
+it lets us evaluate many integrals along the real axis,
+most notably [Fourier transforms](/know/concept/fourier-transform/).
+It can also be used to derive the [Kramers-Kronig relations](/know/concept/kramers-kronig-relations/).
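+
+As a simple example, consider $f(z) = 1 / (1 + z^2)$,
+which has simple poles at $z_p = \pm i$.
+Let $C$ consist of the real axis
+plus a semicircle of radius $R \to \infty$ in the upper half-plane,
+whose contribution vanishes because $|f| \sim 1 / R^2$ there.
+Then only the pole at $z_p = i$ is enclosed,
+with residue $R_p = 1 / (2 i)$, such that:
+
+$$\begin{aligned}
+ \int_{-\infty}^\infty \frac{1}{1 + x^2} \dd{x}
+ = \oint_C f(z) \dd{z}
+ = i 2 \pi \frac{1}{2 i}
+ = \pi
+\end{aligned}$$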
diff --git a/content/know/concept/wiener-process/index.pdc b/content/know/concept/wiener-process/index.pdc
index f8610a2..dc3892d 100644
--- a/content/know/concept/wiener-process/index.pdc
+++ b/content/know/concept/wiener-process/index.pdc
@@ -60,6 +60,12 @@ $$\begin{aligned}
= \infty
\end{aligned}$$
+Furthermore, the Wiener process is a good example
+of both a [martingale](/know/concept/martingale/)
+and a [Markov process](/know/concept/markov-process/),
+since each increment has mean zero (so it is a martingale),
+and all increments are independent (so it is a Markov process).
+
## References