From 62759ea3f910fae2617d033bf8f878d7574f4edd Mon Sep 17 00:00:00 2001
From: Prefetch
Date: Sun, 7 Nov 2021 19:34:18 +0100
Subject: Expand knowledge base, reorganize measure theory, update gitignore
---
.../concept/gronwall-bellman-inequality/index.pdc | 210 +++++++++++++++++++++
content/know/concept/ito-calculus/index.pdc | 180 ++++++++++++++++--
content/know/concept/ito-integral/index.pdc | 7 +-
content/know/concept/martingale/index.pdc | 20 +-
content/know/concept/random-variable/index.pdc | 40 +++-
content/know/concept/sigma-algebra/index.pdc | 61 ------
content/know/concept/stochastic-process/index.pdc | 62 ++++++
content/know/concept/wiener-process/index.pdc | 32 +---
.../know/concept/young-dupre-relation/index.pdc | 2 +-
9 files changed, 492 insertions(+), 122 deletions(-)
create mode 100644 content/know/concept/gronwall-bellman-inequality/index.pdc
create mode 100644 content/know/concept/stochastic-process/index.pdc
diff --git a/content/know/concept/gronwall-bellman-inequality/index.pdc b/content/know/concept/gronwall-bellman-inequality/index.pdc
new file mode 100644
index 0000000..1f093ae
--- /dev/null
+++ b/content/know/concept/gronwall-bellman-inequality/index.pdc
@@ -0,0 +1,210 @@
+---
+title: "Grönwall-Bellman inequality"
+firstLetter: "G"
+publishDate: 2021-11-07
+categories:
+- Mathematics
+
+date: 2021-11-07T09:51:57+01:00
+draft: false
+markup: pandoc
+---
+
+# Grönwall-Bellman inequality
+
+Suppose we have a first-order ordinary differential equation
+for some function $u(t)$, and that it can be shown from this equation
+that the derivative $u'(t)$ is bounded as follows:
+
+$$\begin{aligned}
+ u'(t)
+ \le \beta(t) \: u(t)
+\end{aligned}$$
+
+Where $\beta(t)$ is known.
+Then **Grönwall's inequality** states that the solution $u(t)$ is bounded:
+
+$$\begin{aligned}
+ \boxed{
+ u(t)
+ \le u(0) \exp\!\bigg( \int_0^t \beta(s) \dd{s} \bigg)
+ }
+\end{aligned}$$
+
+We define $w(t)$ to saturate the upper bounds above,
+both on the function itself and on its derivative:
+
+$$\begin{aligned}
+ w(t)
+ \equiv u(0) \exp\!\bigg( \int_0^t \beta(s) \dd{s} \bigg)
+ \quad \implies \quad
+ w'(t)
+ = \beta(t) \: w(t)
+\end{aligned}$$
+
+Where $w(0) = u(0)$.
+The goal is to show the following for all $t$:
+
+$$\begin{aligned}
+ \frac{u(t)}{w(t)} \le 1
+\end{aligned}$$
+
+For $t = 0$, this is trivial, since $w(0) = u(0)$ by definition.
+For $t > 0$, we want $w(t)$ to grow at least as fast as $u(t)$
+in order to satisfy the inequality.
+We thus calculate:
+
+$$\begin{aligned}
+ \dv{t} \bigg( \frac{u}{w} \bigg)
+ = \frac{u' w - u w'}{w^2}
+ = \frac{u' w - u \beta w}{w^2}
+ = \frac{u' - u \beta}{w}
+\end{aligned}$$
+
+Since $u' \le \beta u$ by assumption,
+this derivative can never be positive, so $u/w$ cannot grow
+beyond its initial value of $1$, proving the inequality.
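+
+As a quick numerical sanity check (a minimal sketch, not part
+of the proof; the chosen $\beta$, slack term and initial value
+are arbitrary), we can integrate an example function
+satisfying $u' \le \beta u$ and verify that it stays below the bound:
+
+```python
+import numpy as np
+
+# Construct an example obeying u'(t) <= beta(t) u(t):
+# we integrate u' = beta u - slack, with slack(t) >= 0.
+beta  = lambda t: 0.5 + 0.3 * np.sin(t)   # arbitrary known beta(t)
+slack = lambda t: 0.1 * (1 + np.cos(t))   # arbitrary non-negative slack
+
+dt = 1e-4
+t = np.arange(0.0, 5.0, dt)
+u = np.empty_like(t)
+u[0] = 1.0                                # arbitrary initial value u(0)
+for i in range(1, len(t)):
+    u[i] = u[i-1] + dt * (beta(t[i-1]) * u[i-1] - slack(t[i-1]))
+
+# Gronwall's bound: u(t) <= u(0) exp(int_0^t beta(s) ds)
+bound = u[0] * np.exp(np.cumsum(beta(t)) * dt)
+assert np.all(u <= bound)
+```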
+
+
+
+Grönwall's inequality can be generalized to non-differentiable functions.
+Suppose we know:
+
+$$\begin{aligned}
+ u(t)
+ \le \alpha(t) + \int_0^t \beta(s) \: u(s) \dd{s}
+\end{aligned}$$
+
+Where $\alpha(t)$ and $\beta(t)$ are known.
+Then the **Grönwall-Bellman inequality** states that:
+
+$$\begin{aligned}
+ \boxed{
+ u(t)
+ \le \alpha(t) + \int_0^t \alpha(s) \: \beta(s) \exp\!\bigg( \int_s^t \beta(r) \dd{r} \bigg) \dd{s}
+ }
+\end{aligned}$$
+
+We start by defining $w(t)$ as follows,
+which will act as shorthand:
+
+$$\begin{aligned}
+ w(t)
+ \equiv \exp\!\bigg( \!-\!\! \int_0^t \beta(s) \dd{s} \bigg) \bigg( \int_0^t \beta(s) \: u(s) \dd{s} \bigg)
+\end{aligned}$$
+
+Its derivative $w'(t)$ is then straightforward to calculate:
+
+$$\begin{aligned}
+ w'(t)
+ &= \bigg( \dv{t}\! \int_0^t \beta(s) \: u(s) \dd{s} - \beta(t)\int_0^t \beta(s) \: u(s) \dd{s} \bigg)
+ \exp\!\bigg( \!-\!\! \int_0^t \beta(s) \dd{s} \bigg)
+ \\
+ &= \beta(t) \bigg( u(t) - \int_0^t \beta(s) \: u(s) \dd{s} \bigg)
+ \exp\!\bigg( \!-\!\! \int_0^t \beta(s) \dd{s} \bigg)
+\end{aligned}$$
+
+The parenthesized expression is bounded from above by $\alpha(t)$,
+thanks to the condition that $u(t)$ is assumed to satisfy:
+
+$$\begin{aligned}
+ w'(t)
+ \le \alpha(t) \: \beta(t) \exp\!\bigg( \!-\!\! \int_0^t \beta(s) \dd{s} \bigg)
+\end{aligned}$$
+
+Integrating this to find $w(t)$ yields the following result:
+
+$$\begin{aligned}
+ w(t)
+ \le \int_0^t \alpha(s) \: \beta(s) \exp\!\bigg( \!-\!\! \int_0^s \beta(r) \dd{r} \bigg) \dd{s}
+\end{aligned}$$
+
+In the initial definition of $w(t)$,
+we now move the exponential to the other side,
+and rewrite it using the above inequality for $w(t)$:
+
+$$\begin{aligned}
+ \int_0^t \beta(s) \: u(s) \dd{s}
+ &= w(t) \exp\!\bigg( \int_0^t \beta(s) \dd{s} \bigg)
+ \\
+ &\le \int_0^t \alpha(s) \: \beta(s) \exp\!\bigg( \int_0^t \beta(r) \dd{r} \bigg) \exp\!\bigg( \!-\!\! \int_0^s \beta(r) \dd{r} \bigg) \dd{s}
+ \\
+    &\le \int_0^t \alpha(s) \: \beta(s) \exp\!\bigg( \int_s^t \beta(r) \dd{r} \bigg) \dd{s}
+\end{aligned}$$
+
+Inserting this into the condition that $u(t)$ is assumed to satisfy
+then yields the Grönwall-Bellman inequality as claimed.
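+
+The integral form can be checked numerically in the same spirit;
+below is a minimal sketch, where $\alpha$, $\beta$
+and the slack term are again arbitrary choices:
+
+```python
+import numpy as np
+
+# Construct u(t) = alpha(t) + int_0^t (beta u - slack) ds with slack >= 0,
+# so the condition u <= alpha + int beta u ds holds by construction.
+alpha  = lambda t: 1.0 + 0.5 * t              # arbitrary alpha(t)
+dalpha = lambda t: 0.5                        # its derivative
+beta   = lambda t: 0.4 + 0.2 * np.cos(t)      # arbitrary beta(t)
+slack  = lambda t: 0.2                        # arbitrary non-negative slack
+
+dt = 1e-4
+t = np.arange(0.0, 4.0, dt)
+u = np.empty_like(t)
+u[0] = alpha(0.0)
+for i in range(1, len(t)):                    # u' = alpha' + beta u - slack
+    u[i] = u[i-1] + dt * (dalpha(t[i-1]) + beta(t[i-1]) * u[i-1]
+                          - slack(t[i-1]))
+
+# Bound: alpha(t) + int_0^t alpha(s) beta(s) exp(int_s^t beta(r) dr) ds
+B = np.cumsum(beta(t)) * dt                   # B[i] ~ int_0^t_i beta(r) dr
+bound = alpha(t) + np.exp(B) * np.cumsum(alpha(t) * beta(t) * np.exp(-B)) * dt
+assert np.all(u <= bound)
+```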
+
+
+
+In the special case where $\alpha(t)$ is non-decreasing with $t$,
+the inequality reduces to:
+
+$$\begin{aligned}
+ \boxed{
+ u(t)
+ \le \alpha(t) \exp\!\bigg( \int_0^t \beta(s) \dd{s} \bigg)
+ }
+\end{aligned}$$
+
+Starting from the "ordinary" Grönwall-Bellman inequality,
+the fact that $\alpha(t)$ is non-decreasing tells us that
+$\alpha(s) \le \alpha(t)$ for all $s \le t$, so:
+
+$$\begin{aligned}
+ u(t)
+ &\le \alpha(t) + \int_0^t \alpha(s) \: \beta(s) \exp\!\bigg( \int_s^t \beta(r) \dd{r} \bigg) \dd{s}
+ \\
+ &\le \alpha(t) + \alpha(t) \int_0^t \beta(s) \exp\!\bigg( \int_s^t \beta(r) \dd{r} \bigg) \dd{s}
+\end{aligned}$$
+
+Now, consider the following straightforward identity involving the exponential:
+
+$$\begin{aligned}
+ \dv{s} \exp\!\bigg( \int_s^t \beta(r) \dd{r} \bigg)
+ &= - \beta(s) \exp\!\bigg( \int_s^t \beta(r) \dd{r} \bigg)
+\end{aligned}$$
+
+By inserting this into the Grönwall-Bellman inequality, we arrive at:
+
+$$\begin{aligned}
+ u(t)
+ &\le \alpha(t) - \alpha(t) \int_0^t \dv{s} \exp\!\bigg( \int_s^t \beta(r) \dd{r} \bigg) \dd{s}
+ \\
+ &\le \alpha(t) - \alpha(t) \bigg[ \int \dv{s} \exp\!\bigg( \int_s^t \beta(r) \dd{r} \bigg) \dd{s} \bigg]_{s = 0}^{s = t}
+\end{aligned}$$
+
+Where we have converted the outer integral from definite to indefinite.
+Continuing:
+
+$$\begin{aligned}
+ u(t)
+ &\le \alpha(t) - \alpha(t) \bigg[ \exp\!\bigg( \int_s^t \beta(r) \dd{r} \bigg) \bigg]_{s = 0}^{s = t}
+ \\
+ &\le \alpha(t) - \alpha(t) \exp\!\bigg( \int_t^t \beta(r) \dd{r} \bigg) + \alpha(t) \exp\!\bigg( \int_0^t \beta(r) \dd{r} \bigg)
+ \\
+ &\le \alpha(t) - \alpha(t) + \alpha(t) \exp\!\bigg( \int_0^t \beta(r) \dd{r} \bigg)
+\end{aligned}$$
+
+## References
+1. U.H. Thygesen,
+ *Lecture notes on diffusions and stochastic differential equations*,
+ 2021, Polyteknisk Kompendie.
diff --git a/content/know/concept/ito-calculus/index.pdc b/content/know/concept/ito-calculus/index.pdc
index 576e09a..3527b1d 100644
--- a/content/know/concept/ito-calculus/index.pdc
+++ b/content/know/concept/ito-calculus/index.pdc
@@ -12,10 +12,10 @@ markup: pandoc
# Itō calculus
-Given two time-indexed [random variables](/know/concept/random-variable/)
-(i.e. stochastic processes) $F_t$ and $G_t$,
-then consider the following random variable $X_t$,
-where $B_t$ is the [Wiener process](/know/concept/wiener-process/):
+Given two [stochastic processes](/know/concept/stochastic-process/)
+$F_t$ and $G_t$, consider the following random variable $X_t$,
+where $B_t$ is the [Wiener process](/know/concept/wiener-process/),
+i.e. Brownian motion:
$$\begin{aligned}
X_t
@@ -27,7 +27,7 @@ assuming $G_t$ is Itō-integrable.
We call $X_t$ an **Itō process** if $F_t$ is locally integrable,
and the initial condition $X_0$ is known,
i.e. $X_0$ is $\mathcal{F}_0$-measurable,
-where $\mathcal{F}_t$ is the [filtration](/know/concept/sigma-algebra/)
+where $\mathcal{F}_t$ is the filtration
to which $F_t$, $G_t$ and $B_t$ are adapted.
The above definition of $X_t$ is often abbreviated as follows,
where $X_0$ is implicit:
@@ -39,8 +39,18 @@ $$\begin{aligned}
Typically, $F_t$ is referred to as the **drift** of $X_t$,
and $G_t$ as its **intensity**.
+Because the Itō integral of $G_t$ is a
+[martingale](/know/concept/martingale/),
+it does not contribute to the mean of $X_t$:
+
+$$\begin{aligned}
+ \mathbf{E}[X_t]
+ = \int_0^t \mathbf{E}[F_s] \dd{s}
+\end{aligned}$$
+
Now, consider the following **Itō stochastic differential equation** (SDE),
-where $\xi_t = \dv*{B_t}{t}$ is white noise:
+where $\xi_t = \dv*{B_t}{t}$ is white noise,
+informally treated as the $t$-derivative of $B_t$:
$$\begin{aligned}
\dv{X_t}{t}
@@ -51,15 +61,6 @@ An Itō process $X_t$ is said to satisfy this equation
if $f(X_t, t) = F_t$ and $g(X_t, t) = G_t$,
in which case $X_t$ is also called an **Itō diffusion**.
-Because the Itō integral of $G_t$ is a
-[martingale](/know/concept/martingale/),
-it does not contribute to the mean of $X_t$:
-
-$$\begin{aligned}
- \mathbf{E}[X_t]
- = \int_0^t \mathbf{E}[F_s] \dd{s}
-\end{aligned}$$
-
## Itō's lemma
@@ -204,9 +205,156 @@ $$\begin{aligned}
0
&= f(x) \: h'(x) + \frac{1}{2} g^2(x) \: h''(x)
\\
- &= \Big( f(x) - \frac{1}{2} g^2(x) \frac{2 f(x)}{g(x)} \Big) \exp\!\bigg( \!-\!\! \int_{x_1}^x \frac{2 f(y)}{g^2(y)} \dd{y} \bigg)
+ &= \Big( f(x) - \frac{1}{2} g^2(x) \frac{2 f(x)}{g^2(x)} \Big) \exp\!\bigg( \!-\!\! \int_{x_1}^x \frac{2 f(y)}{g^2(y)} \dd{y} \bigg)
+\end{aligned}$$
+
+
+## Existence and uniqueness
+
+It is worth knowing under what conditions a solution to a given SDE exists,
+in the sense that it remains finite on the entire time axis.
+Suppose the drift $f$ and intensity $g$ satisfy these inequalities,
+for some known constant $K$ and for all $x$:
+
+$$\begin{aligned}
+ x f(x) \le K (1 + x^2)
+ \qquad \quad
+ g^2(x) \le K (1 + x^2)
+\end{aligned}$$
+
+When this is satisfied, we can find the following upper bound
+on an Itō process $X_t$,
+which implies that $X_t$ is finite (in mean square) for all finite $t$:
+
+$$\begin{aligned}
+ \boxed{
+ \mathbf{E}[X_t^2]
+ \le \big(X_0^2 + 3 K t\big) \exp\!\big(3 K t\big)
+ }
+\end{aligned}$$
+
+If we define $Y_t \equiv X_t^2$,
+then Itō's lemma tells us that the following holds:
+
+$$\begin{aligned}
+ \dd{Y_t}
+ = \big( 2 X_t \: f(X_t) + g^2(X_t) \big) \dd{t} + 2 X_t \: g(X_t) \dd{B_t}
\end{aligned}$$
+Integrating and taking the expectation value
+removes the Wiener term, leaving:
+
+$$\begin{aligned}
+ \mathbf{E}[Y_t]
+ = Y_0 + \mathbf{E}\! \int_0^t 2 X_s f(X_s) + g^2(X_s) \dd{s}
+\end{aligned}$$
+
+Given that $K (1 \!+\! x^2)$ is an upper bound of $x f(x)$ and $g^2(x)$,
+we get an inequality:
+
+$$\begin{aligned}
+ \mathbf{E}[Y_t]
+ &\le Y_0 + \mathbf{E}\! \int_0^t 2 K (1 \!+\! X_s^2) + K (1 \!+\! X_s^2) \dd{s}
+ \\
+ &\le Y_0 + \int_0^t 3 K (1 + \mathbf{E}[Y_s]) \dd{s}
+ \\
+ &\le Y_0 + 3 K t + \int_0^t 3 K \big( \mathbf{E}[Y_s] \big) \dd{s}
+\end{aligned}$$
+
+We then apply the
+[Grönwall-Bellman inequality](/know/concept/gronwall-bellman-inequality/),
+noting that $(Y_0 \!+\! 3 K t)$ does not decrease with time, leading us to:
+
+$$\begin{aligned}
+ \mathbf{E}[Y_t]
+ &\le (Y_0 + 3 K t) \exp\!\bigg( \int_0^t 3 K \dd{s} \bigg)
+ \\
+ &\le (Y_0 + 3 K t) \exp\!\big(3 K t\big)
+\end{aligned}$$
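+
+To illustrate, consider the arbitrarily chosen SDE
+$\dd{X_t} = -X_t \dd{t} + \dd{B_t}$,
+which satisfies both inequalities with $K = 1$.
+A minimal Euler-Maruyama sketch then confirms that
+$\mathbf{E}[X_t^2]$ stays far below the (rather loose) bound:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+
+# Example SDE: f(x) = -x and g(x) = 1, so that
+# x f(x) = -x^2 <= 1 + x^2 and g^2 = 1 <= 1 + x^2, i.e. K = 1.
+K, X0, T, dt, paths = 1.0, 1.0, 2.0, 1e-3, 100_000
+X = np.full(paths, X0)
+for _ in range(int(T / dt)):                 # Euler-Maruyama steps
+    X += -X * dt + np.sqrt(dt) * rng.standard_normal(paths)
+
+print(np.mean(X**2))                         # estimate of E[X_T^2]: ~ 0.5
+print((X0**2 + 3*K*T) * np.exp(3*K*T))       # bound: ~ 2.8e3
+```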
+
+
+
+If a solution exists, it is also worth knowing whether it is unique.
+Suppose that $f$ and $g$ satisfy the following inequalities,
+for some constant $K$ and for all $x$ and $y$:
+
+$$\begin{aligned}
+ \big| f(x) - f(y) \big| \le K \big| x - y \big|
+ \qquad \quad
+ \big| g(x) - g(y) \big| \le K \big| x - y \big|
+\end{aligned}$$
+
+Let $X_t$ and $Y_t$ both be solutions of a given SDE,
+with possibly different initial conditions,
+such that their difference is initially $X_0 \!-\! Y_0$.
+Then the difference $X_t \!-\! Y_t$ is bounded by:
+
+$$\begin{aligned}
+ \boxed{
+ \mathbf{E}\big[ (X_t - Y_t)^2 \big]
+ \le (X_0 - Y_0)^2 \exp\!\Big( \big(2 K \!+\! K^2 \big) t \Big)
+ }
+\end{aligned}$$
+
+We define $D_t \equiv X_t \!-\! Y_t$ and $Z_t \equiv D_t^2 \ge 0$,
+together with $F_t \equiv f(X_t) \!-\! f(Y_t)$ and $G_t \equiv g(X_t) \!-\! g(Y_t)$,
+such that Itō's lemma states:
+
+$$\begin{aligned}
+ \dd{Z_t}
+ = \big( 2 D_t F_t + G_t^2 \big) \dd{t} + 2 D_t G_t \dd{B_t}
+\end{aligned}$$
+
+Integrating and taking the expectation value
+removes the Wiener term, leaving:
+
+$$\begin{aligned}
+ \mathbf{E}[Z_t]
+ = Z_0 + \mathbf{E}\! \int_0^t 2 D_s F_s + G_s^2 \dd{s}
+\end{aligned}$$
+
+The *Cauchy-Schwarz inequality* states that $|D_s F_s| \le |D_s| |F_s|$,
+and the given Lipschitz conditions tell us that
+$|F_s| \le K |D_s|$ and $|G_s| \le K |D_s|$, so:
+
+$$\begin{aligned}
+ \mathbf{E}[Z_t]
+ &\le Z_0 + \mathbf{E}\! \int_0^t 2 K D_s^2 + K^2 D_s^2 \dd{s}
+ \\
+ &\le Z_0 + \int_0^t (2 K \!+\! K^2) \: \mathbf{E}[Z_s] \dd{s}
+\end{aligned}$$
+
+Where we have used that $D_s F_s \le |D_s F_s|$,
+and that $|D_s|^2 = D_s^2$ because $D_s$ is real.
+We then apply the
+[Grönwall-Bellman inequality](/know/concept/gronwall-bellman-inequality/),
+recognizing that $Z_0$ does not decrease with time (since it is constant):
+
+$$\begin{aligned}
+ \mathbf{E}[Z_t]
+ &\le Z_0 \exp\!\bigg( \int_0^t 2 K \!+\! K^2 \dd{s} \bigg)
+ \\
+ &\le Z_0 \exp\!\Big( \big( 2 K \!+\! K^2 \big) t \Big)
+\end{aligned}$$
+
+
+
+Using these properties, it can then be shown
+that if all of the above conditions are satisfied,
+then the SDE has a unique solution,
+which is $\mathcal{F}_t$-adapted, continuous, and exists for all times.
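+
+To illustrate, the sketch below reuses the example SDE from earlier
+($f(x) = -x$ and $g(x) = 1$ are Lipschitz continuous with $K = 1$),
+and drives two solutions with a shared Wiener process
+from different initial values:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(1)
+
+# Two solutions of dX = -X dt + dB, starting from X_0 = 1 and Y_0 = 1/2,
+# driven by the same noise increments dB.
+K, T, dt, paths = 1.0, 2.0, 1e-3, 100_000
+X = np.full(paths, 1.0)
+Y = np.full(paths, 0.5)
+for _ in range(int(T / dt)):
+    dB = np.sqrt(dt) * rng.standard_normal(paths)
+    X += -X * dt + dB
+    Y += -Y * dt + dB
+
+print(np.mean((X - Y)**2))                        # ~ 0.25 exp(-2T) ~ 4.6e-3
+print((1.0 - 0.5)**2 * np.exp((2*K + K**2) * T))  # bound: ~ 1.0e2
+```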
+
## References
diff --git a/content/know/concept/ito-integral/index.pdc b/content/know/concept/ito-integral/index.pdc
index ec49189..cbd4a91 100644
--- a/content/know/concept/ito-integral/index.pdc
+++ b/content/know/concept/ito-integral/index.pdc
@@ -13,9 +13,8 @@ markup: pandoc
# Itō integral
The **Itō integral** offers a way to integrate
-a time-indexed [random variable](/know/concept/random-variable/)
-$G_t$ (i.e. a stochastic process) with respect
-to a [Wiener process](/know/concept/wiener-process/) $B_t$,
+a given [stochastic process](/know/concept/stochastic-process/) $G_t$
+with respect to a [Wiener process](/know/concept/wiener-process/) $B_t$,
which is also a stochastic process.
The Itō integral $I_t$ of $G_t$ is defined as follows:
@@ -29,7 +28,7 @@ $$\begin{aligned}
Where we have partitioned the time interval $[a, b]$ into steps of size $h$.
The above integral exists if $G_t$ and $B_t$ are adapted
-to a common [filtration](/know/concept/sigma-algebra) $\mathcal{F}_t$,
+to a common filtration $\mathcal{F}_t$,
and $\mathbf{E}[G_t^2]$ is integrable for $t \in [a, b]$.
If $I_t$ exists, $G_t$ is said to be **Itō-integrable** with respect to $B_t$.
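+
+As an illustration (not part of the definition),
+this sum can be evaluated numerically for the choice $G_t = B_t$,
+whose Itō integral has the known closed form $(B_t^2 - t)/2$:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(2)
+
+# Ito sum for G_t = B_t: the integrand is evaluated at the LEFT
+# endpoint of each interval, as in the definition above.
+T, n, paths = 1.0, 5_000, 500
+dB = np.sqrt(T / n) * rng.standard_normal((paths, n))
+B = np.cumsum(dB, axis=1)
+B_left = np.hstack([np.zeros((paths, 1)), B[:, :-1]])
+I = np.sum(B_left * dB, axis=1)            # ~ int_0^T B_t dB_t
+
+# Compare with the closed form (B_T^2 - T) / 2:
+print(np.mean((I - (B[:, -1]**2 - T) / 2)**2))  # -> 0 as h -> 0
+```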
diff --git a/content/know/concept/martingale/index.pdc b/content/know/concept/martingale/index.pdc
index 07ed1a4..21fa918 100644
--- a/content/know/concept/martingale/index.pdc
+++ b/content/know/concept/martingale/index.pdc
@@ -12,15 +12,14 @@ markup: pandoc
# Martingale
-A **martingale** is a type of stochastic process
-(i.e. a time-indexed [random variable](/know/concept/random-variable/))
+A **martingale** is a type of
+[stochastic process](/know/concept/stochastic-process/)
with important and useful properties,
especially for stochastic calculus.
For a stochastic process $\{ M_t : t \ge 0 \}$
-on a probability space $(\Omega, \mathcal{F}, P)$ with filtration $\{ \mathcal{F}_t \}$
-(see [$\sigma$-algebra](/know/concept/sigma-algebra/)),
-then $\{ M_t \}$ is a martingale if it satisfies all of the following:
+on a filtered probability space $(\Omega, \mathcal{F}, \{ \mathcal{F}_t \}, P)$,
+$M_t$ is a martingale if it satisfies all of the following:
1. $M_t$ is $\mathcal{F}_t$-adapted, meaning
the filtration $\mathcal{F}_t$ contains enough information
@@ -33,19 +32,18 @@ then $\{ M_t \}$ is a martingale if it satisfies all of the following:
to be zero $\mathbf{E}(M_t \!-\! M_s | \mathcal{F}_s) = 0$.
The last condition is called the **martingale property**,
-and essentially means that a martingale is an unbiased random walk.
-Accordingly, the [Wiener process](/know/concept/wiener-process/) $\{ B_t \}$
-(Brownian motion) is a prime example of a martingale
-(with respect to its own filtration),
+and basically means that a martingale is an unbiased random walk.
+Accordingly, the [Wiener process](/know/concept/wiener-process/) $B_t$
+(Brownian motion) is an example of a martingale,
since each of its increments $B_t \!-\! B_s$ has mean $0$ by definition.
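+
+For instance, a quick simulation sketch (illustration only)
+confirms that Wiener increments after time $s$ average to zero,
+and are uncorrelated with the path up to $s$:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(3)
+
+# Sample many Wiener paths and inspect the increment B_t - B_s.
+paths, n, dt = 50_000, 100, 0.01
+dB = np.sqrt(dt) * rng.standard_normal((paths, n))
+B = np.cumsum(dB, axis=1)
+s, t = 49, 99                      # indices of times s = 0.5, t = 1.0
+
+print(np.mean(B[:, t] - B[:, s]))  # ~ 0: increments are unbiased
+print(np.corrcoef(B[:, s], B[:, t] - B[:, s])[0, 1])  # ~ 0: independent
+```
+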
Modifying property (3) leads to two common generalizations.
-The stochastic process $\{ M_t \}$ above is a **submartingale**
+The stochastic process $M_t$ above is a **submartingale**
if the current value is a lower bound for the expectation:
3. For $0 \le s \le t$, the conditional expectation $\mathbf{E}(M_t | \mathcal{F}_s) \ge M_s$.
-Analogouly, $\{ M_t \}$ is a **supermartingale**
+Analogously, $M_t$ is a **supermartingale**
if the current value is an upper bound instead:
3. For $0 \le s \le t$, the conditional expectation $\mathbf{E}(M_t | \mathcal{F}_s) \le M_s$.
diff --git a/content/know/concept/random-variable/index.pdc b/content/know/concept/random-variable/index.pdc
index 2a8643e..bc41744 100644
--- a/content/know/concept/random-variable/index.pdc
+++ b/content/know/concept/random-variable/index.pdc
@@ -73,7 +73,8 @@ $$\begin{aligned}
\quad \mathrm{for\:any\:} B \in \mathcal{B}(\mathbb{R}^n)
\end{aligned}$$
-In other words, for a given Borel set (see $\sigma$-algebra) $B \in \mathcal{B}(\mathbb{R}^n)$,
+In other words, for a given Borel set
+(see [$\sigma$-algebra](/know/concept/sigma-algebra/)) $B \in \mathcal{B}(\mathbb{R}^n)$,
the set of all outcomes $\omega \in \Omega$ that satisfy $X(\omega) \in B$
must form a valid event; this set must be in $\mathcal{F}$.
The point is that we need to be able to assign probabilities
@@ -94,7 +95,38 @@ $X^{-1}$ can be regarded as the inverse of $X$:
it maps $B$ to the event for which $X \in B$.
With this, our earlier requirement that $X$ be measurable
can be written as: $X^{-1}(B) \in \mathcal{F}$ for any $B \in \mathcal{B}(\mathbb{R}^n)$.
-This is also often stated as *"$X$ is $\mathcal{F}$-measurable"*.
+This is also often stated as "$X$ is *$\mathcal{F}$-measurable*".
+
+Related to $\mathcal{F}$ is the **information**
+obtained by observing a random variable $X$.
+Let $\sigma(X)$ be the information generated by observing $X$,
+i.e. the events whose occurrence can be deduced from the value of $X$,
+or, more formally:
+
+$$\begin{aligned}
+ \sigma(X)
+ = X^{-1}(\mathcal{B}(\mathbb{R}^n))
+ = \{ A \in \mathcal{F} : A = X^{-1}(B) \mathrm{\:for\:some\:} B \in \mathcal{B}(\mathbb{R}^n) \}
+\end{aligned}$$
+
+In other words, if the realized value of $X$ is
+found to be in a certain Borel set $B \in \mathcal{B}(\mathbb{R}^n)$,
+then the preimage $X^{-1}(B)$ (i.e. the event yielding this $B$)
+is known to have occurred.
+
+In general, given any $\sigma$-algebra $\mathcal{H}$,
+a variable $Y$ is said to be *"$\mathcal{H}$-measurable"*
+if $\sigma(Y) \subseteq \mathcal{H}$,
+so that $\mathcal{H}$ contains at least
+all information extractable from $Y$.
+
+Note that $\mathcal{H}$ can be generated by another random variable $X$,
+i.e. $\mathcal{H} = \sigma(X)$.
+In that case, the **Doob-Dynkin lemma** states
+that $Y$ is $\sigma(X)$-measurable
+only if $Y$ can always be computed from $X$,
+i.e. there exists a function $f$ such that
+$Y(\omega) = f(X(\omega))$ for all $\omega \in \Omega$.
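+
+In a finite setting, $\sigma(X)$ can be enumerated explicitly.
+Below is a small sketch of a hypothetical example,
+where a die is rolled and $X$ only reveals the parity of the outcome:
+
+```python
+from itertools import chain, combinations
+
+Omega = [1, 2, 3, 4, 5, 6]                # sample space: one die roll
+X = lambda w: w % 2                       # X reveals only the parity
+
+# sigma(X): preimages of all subsets of X's range (in this finite
+# setting, every subset plays the role of a Borel set).
+range_X = sorted({X(w) for w in Omega})
+subsets = chain.from_iterable(combinations(range_X, r)
+                              for r in range(len(range_X) + 1))
+sigma_X = {frozenset(w for w in Omega if X(w) in B) for B in subsets}
+print(sorted(sorted(A) for A in sigma_X))
+# [[], [1, 2, 3, 4, 5, 6], [1, 3, 5], [2, 4, 6]]
+
+# Doob-Dynkin: any Y of the form Y = f(X), e.g. Y(w) = (w % 2)**2,
+# is sigma(X)-measurable, whereas Y(w) = w is not.
+```
+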
Now, we are ready to define some familiar concepts from probability theory.
The **cumulative distribution function** $F_X(x)$ is
@@ -163,6 +195,10 @@ $$\begin{aligned}
= \mathbf{E}[X^2] - \big(\mathbf{E}[X]\big)^2
\end{aligned}$$
+It is also possible to calculate expectation values and variances
+adjusted to some given event information:
+see [conditional expectation](/know/concept/conditional-expectation/).
+
## References
diff --git a/content/know/concept/sigma-algebra/index.pdc b/content/know/concept/sigma-algebra/index.pdc
index 96240ff..94e7306 100644
--- a/content/know/concept/sigma-algebra/index.pdc
+++ b/content/know/concept/sigma-algebra/index.pdc
@@ -42,9 +42,6 @@ Likewise, a **sub-$\sigma$-algebra**
is a sub-family of a certain $\mathcal{F}$,
which is a valid $\sigma$-algebra in its own right.
-
-## Notable applications
-
A notable $\sigma$-algebra is the **Borel algebra** $\mathcal{B}(\Omega)$,
which is defined when $\Omega$ is a metric space,
such as the real numbers $\mathbb{R}$.
@@ -54,64 +51,6 @@ and all the subsets of $\mathbb{R}$ obtained by countable sequences
of unions and intersections of those intervals.
The elements of $\mathcal{B}$ are **Borel sets**.
-
-
-Another example of a $\sigma$-algebra is the **information**
-obtained by observing a [random variable](/know/concept/random-variable/) $X$.
-Let $\sigma(X)$ be the information generated by observing $X$,
-i.e. the events whose occurrence can be deduced from the value of $X$:
-
-$$\begin{aligned}
- \sigma(X)
- = X^{-1}(\mathcal{B}(\mathbb{R}^n))
- = \{ A \in \mathcal{F} : A = X^{-1}(B) \mathrm{\:for\:some\:} B \in \mathcal{B}(\mathbb{R}^n) \}
-\end{aligned}$$
-
-In other words, if the realized value of $X$ is
-found to be in a certain Borel set $B \in \mathcal{B}(\mathbb{R}^n)$,
-then the preimage $X^{-1}(B)$ (i.e. the event yielding this $B$)
-is known to have occurred.
-
-Given a $\sigma$-algebra $\mathcal{H}$,
-a random variable $Y$ is said to be *"$\mathcal{H}$-measurable"*
-if $\sigma(Y) \subseteq \mathcal{H}$,
-meaning that $\mathcal{H}$ contains at least
-all information extractable from $Y$.
-
-Note that $\mathcal{H}$ can be generated by another random variable $X$,
-i.e. $\mathcal{H} = \sigma(X)$.
-In that case, the **Doob-Dynkin lemma** states
-that $Y$ is only $\sigma(X)$-measurable
-if $Y$ can always be computed from $X$,
-i.e. there exists a function $f$ such that
-$Y(\omega) = f(X(\omega))$ for all $\omega \in \Omega$.
-
-
-
-The concept of information can be extended for
-stochastic processes (i.e. time-indexed random variables):
-if $\{ X_t : t \ge 0 \}$ is a stochastic process,
-its **filtration** $\mathcal{F}_t$ contains all
-the information generated by $X_t$ up to the current time $t$:
-
-$$\begin{aligned}
- \mathcal{F}_t
- = \sigma(X_s : 0 \le s \le t)
-\end{aligned}$$
-
-In other words, $\mathcal{F}_t$ is the "accumulated" $\sigma$-algebra
-of all information extractable from $X_t$,
-and hence grows with time: $\mathcal{F}_s \subset \mathcal{F}_t$ for $s < t$.
-Given $\mathcal{F}_t$, all values $X_s$ for $s \le t$ can be computed,
-i.e. if you know $\mathcal{F}_t$, then the present and past of $X_t$ can be reconstructed.
-
-Given some filtration $\mathcal{H}_t$, a stochastic process $X_t$
-is said to be *"$\mathcal{H}_t$-adapted"*
-if $X_t$'s own filtration $\sigma(X_s : 0 \le s \le t) \subseteq \mathcal{H}_t$,
-meaning $\mathcal{H}_t$ contains enough information
-to determine the current and past values of $X_t$.
-Clearly, $X_t$ is always adapted to its own filtration.
-
## References
diff --git a/content/know/concept/stochastic-process/index.pdc b/content/know/concept/stochastic-process/index.pdc
new file mode 100644
index 0000000..5d50da8
--- /dev/null
+++ b/content/know/concept/stochastic-process/index.pdc
@@ -0,0 +1,62 @@
+---
+title: "Stochastic process"
+firstLetter: "S"
+publishDate: 2021-11-07
+categories:
+- Mathematics
+
+date: 2021-11-07T18:45:42+01:00
+draft: false
+markup: pandoc
+---
+
+# Stochastic process
+
+A **stochastic process** $X_t$ is a time-indexed
+[random variable](/know/concept/random-variable/),
+$\{ X_t : t \ge 0 \}$, i.e. a set of (usually correlated)
+random variables, each labelled with a unique timestamp $t$.
+
+Whereas "ordinary" random variables are defined on
+a probability space $(\Omega, \mathcal{F}, P)$,
+stochastic processes are defined on
+a **filtered probability space** $(\Omega, \mathcal{F}, \{ \mathcal{F}_t \}, P)$.
+As before, $\Omega$ is the sample space,
+$\mathcal{F}$ is the event space,
+and $P$ is the probability measure.
+
+The **filtration** $\{ \mathcal{F}_t : t \ge 0 \}$
+is a time-indexed set of [$\sigma$-algebras](/know/concept/sigma-algebra/) on $\Omega$,
+where each $\mathcal{F}_t$ contains at least all the information generated
+by $X_t$ up to the current time $t$,
+and is itself a subset of $\mathcal{F}$:
+
+$$\begin{aligned}
+ \mathcal{F}
+ \supseteq \mathcal{F}_t
+ \supseteq \sigma(X_s : 0 \le s \le t)
+\end{aligned}$$
+
+In other words, $\mathcal{F}_t$ is the "accumulated" $\sigma$-algebra
+of all information extractable from $X_t$,
+and hence grows with time: $\mathcal{F}_s \subseteq \mathcal{F}_t$ for $s < t$.
+Given $\mathcal{F}_t$, all values $X_s$ for $s \le t$ can be computed,
+i.e. if you know $\mathcal{F}_t$, then the present and past of $X_t$ can be reconstructed.
+
+Given any filtration $\mathcal{H}_t$, a stochastic process $X_t$
+is said to be *"$\mathcal{H}_t$-adapted"*
+if $X_t$'s own filtration $\sigma(X_s : 0 \le s \le t) \subseteq \mathcal{H}_t$,
+meaning $\mathcal{H}_t$ contains enough information
+to determine the current and past values of $X_t$.
+Clearly, $X_t$ is always adapted to its own filtration.
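+
+To make this concrete, consider the toy sketch below with three
+coin flips, where $\mathcal{F}_t$ is represented by its atoms,
+i.e. the groups of outcomes that cannot be distinguished
+by the flips observed up to time $t$:
+
+```python
+from itertools import product
+
+Omega = list(product([0, 1], repeat=3))   # three coin flips; X_t = flip t
+
+def atoms(t):
+    """Atoms of F_t: outcomes grouped by their first t flips."""
+    groups = {}
+    for w in Omega:
+        groups.setdefault(w[:t], []).append(w)
+    return list(groups.values())
+
+for t in range(4):
+    print(t, len(atoms(t)))   # 1, 2, 4, 8 atoms: F_t grows with t
+```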
+
+Filtrations and adaptedness are very useful
+for working with stochastic processes,
+most notably for calculating [conditional expectations](/know/concept/conditional-expectation/).
+
+
+
+## References
+1. U.H. Thygesen,
+ *Lecture notes on diffusions and stochastic differential equations*,
+ 2021, Polyteknisk Kompendie.
diff --git a/content/know/concept/wiener-process/index.pdc b/content/know/concept/wiener-process/index.pdc
index 3602b44..f8610a2 100644
--- a/content/know/concept/wiener-process/index.pdc
+++ b/content/know/concept/wiener-process/index.pdc
@@ -13,14 +13,13 @@ markup: pandoc
# Wiener process
-The **Wiener process** is a stochastic process that provides
-a pure mathematical definition of the physical phenomenon of **Brownian motion**,
+The **Wiener process** is a [stochastic process](/know/concept/stochastic-process/)
+that provides a pure mathematical definition
+of the physical phenomenon of **Brownian motion**,
and hence is also called *Brownian motion*.
A Wiener process $B_t$ is defined as any
-time-indexed [random variable](/know/concept/random-variable/)
-$\{B_t: t \ge 0\}$ (i.e. stochastic process)
-that has the following properties:
+stochastic process $\{B_t: t \ge 0\}$ that satisfies:
1. Initial condition $B_0 = 0$.
2. Each **increment** of $B_t$ is independent of the past:
@@ -49,28 +48,7 @@ Another consequence is invariance under "time inversion",
by defining $\sqrt{\alpha} = t$, such that $W_t = t B_{1/t}$.
Despite being continuous by definition,
-the **total variation** $V(B)$ of $B_t$ is infinite
-(informally, the curve is infinitely long).
-For $t_i \in [0, 1]$ in $n$ steps of maximum size $\Delta t$:
-
-$$\begin{aligned}
- V_t
- = \lim_{\Delta t \to 0} \sup \sum_{i = 1}^n \big|B_{t_i} - B_{t_{i-1}}\big|
- = \infty
-\end{aligned}$$
-
-However, curiously, the **quadratic variation**, written as $[B]_t$,
-turns out to be deterministically finite and equal to $t$,
-while a differentiable function $f$ would have $[f]_t = 0$:
-
-$$\begin{aligned}
- \:[B]_t
- = \lim_{\Delta t \to 0} \sum_{i = 1}^n \big|B_{t_i} - B_{t_{i - 1}}\big|^2
- = t
-\end{aligned}$$
-
-Therefore, despite being continuous by definition,
-the Wiener process is not differentiable,
+the Wiener process is not differentiable in general,
not even in the mean square, because:
$$\begin{aligned}
diff --git a/content/know/concept/young-dupre-relation/index.pdc b/content/know/concept/young-dupre-relation/index.pdc
index d3f36cb..579bd5e 100644
--- a/content/know/concept/young-dupre-relation/index.pdc
+++ b/content/know/concept/young-dupre-relation/index.pdc
@@ -81,7 +81,7 @@ $$\begin{aligned}
= \alpha_{sg} - \alpha_{sl}
\end{aligned}$$
-At the edge of the droplet, imagine a small rectangular triangle
+At the edge of the droplet, imagine a small right-angled triangle
with one side $\dd{x}$ on the $x$-axis,
the hypotenuse on $y(x)$ having length $\dd{x} \sqrt{1 + (y')^2}$,
and the corner between them being the contact point with angle $\theta$.