author      Prefetch    2021-06-02 13:28:53 +0200
committer   Prefetch    2021-06-02 13:28:53 +0200
commit      cc295b5da8e3db4417523a507caf106d5839d989 (patch)
tree        d86d4898ac3fddceecff67dff047a3aa4aef784b /content
parent      aab299218975a8e775cda26ce256ffb1fe36c863 (diff)
Introduce collapsible proofs to some articles
Diffstat (limited to 'content')
-rw-r--r--  content/know/concept/binomial-distribution/index.pdc     |  72
-rw-r--r--  content/know/concept/convolution-theorem/index.pdc       |  26
-rw-r--r--  content/know/concept/curvilinear-coordinates/index.pdc   | 154
-rw-r--r--  content/know/concept/dirac-delta-function/index.pdc      |  41
-rw-r--r--  content/know/concept/heaviside-step-function/index.pdc   |  27
-rw-r--r--  content/know/concept/holomorphic-function/index.pdc      |  71
-rw-r--r--  content/know/concept/parsevals-theorem/index.pdc         |  32
7 files changed, 271 insertions, 152 deletions
diff --git a/content/know/concept/binomial-distribution/index.pdc b/content/know/concept/binomial-distribution/index.pdc
index 70cc897..e644164 100644
--- a/content/know/concept/binomial-distribution/index.pdc
+++ b/content/know/concept/binomial-distribution/index.pdc
@@ -22,7 +22,7 @@ that $n$ out of the $N$ trials succeed:
$$\begin{aligned}
\boxed{
- P_N(n) = \binom{N}{n} \: p^n (1 - p)^{N - n}
+ P_N(n) = \binom{N}{n} \: p^n q^{N - n}
}
\end{aligned}$$
@@ -41,8 +41,20 @@ $$\begin{aligned}
The remaining factor $p^n (1 - p)^{N - n}$ is then just the
probability of attaining each microstate.
-To find the mean number of successes $\mu$,
-the trick is to treat $p$ and $q$ as independent:
+The expected or mean number of successes $\mu$ after $N$ trials is as follows:
+
+$$\begin{aligned}
+ \boxed{
+ \mu = N p
+ }
+\end{aligned}$$
+
+<div class="accordion">
+<input type="checkbox" id="proof-mean"/>
+<label for="proof-mean">Proof</label>
+<div class="hidden">
+<label for="proof-mean">Proof.</label>
+The trick is to treat $p$ and $q$ as independent until the last moment:
$$\begin{aligned}
\mu
@@ -54,16 +66,26 @@ $$\begin{aligned}
= N p (p + q)^{N - 1}
\end{aligned}$$
-By inserting $q = 1 - p$, we find the following expression for the mean:
+Inserting $q = 1 - p$ then gives the desired result.
+</div>
+</div>
+
+Meanwhile, the variance $\sigma^2$ is as follows,
+where $\sigma$ is the standard deviation:
$$\begin{aligned}
\boxed{
- \mu = N p
+ \sigma^2 = N p q
}
\end{aligned}$$
-Next, we use the same trick to calculate $\overline{n^2}$
-(the mean of the squared number of successes):
+<div class="accordion">
+<input type="checkbox" id="proof-var"/>
+<label for="proof-var">Proof</label>
+<div class="hidden">
+<label for="proof-var">Proof.</label>
+We use the same trick to calculate $\overline{n^2}$
+(the mean squared number of successes):
$$\begin{aligned}
\overline{n^2}
@@ -79,7 +101,7 @@ $$\begin{aligned}
&= N p + N^2 p^2 - N p^2
\end{aligned}$$
-Using this and the earlier expression for $\mu$, we find the variance $\sigma^2$:
+Using this and the earlier expression $\mu = N p$, we find the variance $\sigma^2$:
$$\begin{aligned}
\sigma^2
@@ -88,18 +110,26 @@ $$\begin{aligned}
= N p (1 - p)
\end{aligned}$$
-Once again, by inserting $q = 1 - p$, we find the following expression for the variance:
+By inserting $q = 1 - p$, we arrive at the desired expression.
+</div>
+</div>
+
+As $N \to \infty$, the binomial distribution
+turns into the continuous normal distribution:
$$\begin{aligned}
\boxed{
- \sigma^2 = N p q
+ \lim_{N \to \infty} P_N(n) = \frac{1}{\sqrt{2 \pi \sigma^2}} \exp\!\Big(\!-\!\frac{(n - \mu)^2}{2 \sigma^2} \Big)
}
\end{aligned}$$
-As $N$ grows to infinity, the binomial distribution
-turns into the continuous normal distribution.
-We demonstrate this by taking the Taylor expansion of its
-natural logarithm $\ln\!\big(P_N(n)\big)$ around the mean $\mu = Np$:
+<div class="accordion">
+<input type="checkbox" id="proof-normal"/>
+<label for="proof-normal">Proof</label>
+<div class="hidden">
+<label for="proof-normal">Proof.</label>
+We take the Taylor expansion of $\ln\!\big(P_N(n)\big)$
+around the mean $\mu = Np$:
$$\begin{aligned}
\ln\!\big(P_N(n)\big)
@@ -108,7 +138,7 @@ $$\begin{aligned}
D_m(n) = \dv[m]{\ln\!\big(P_N(n)\big)}{n}
\end{aligned}$$
-We use Stirling's approximation to calculate all these factorials:
+We use Stirling's approximation to calculate the factorials in $D_m$:
$$\begin{aligned}
\ln\!\big(P_N(n)\big)
@@ -163,7 +193,7 @@ $$\begin{aligned}
= - \frac{1}{\sigma^2}
\end{aligned}$$
-The higher-order derivatives tend to zero for large $N$, so we discard them:
+The higher-order derivatives tend to zero for $N \to \infty$, so we discard them:
$$\begin{aligned}
D_3(n)
@@ -184,13 +214,9 @@ $$\begin{aligned}
= \ln\!\Big( \frac{1}{\sqrt{2\pi \sigma^2}} \Big) - \frac{(n - \mu)^2}{2 \sigma^2}
\end{aligned}$$
-Thus, as $N$ goes to infinity, the binomial distribution becomes a Gaussian:
-
-$$\begin{aligned}
- \boxed{
- \lim_{N \to \infty} P_N(n) = \frac{1}{\sqrt{2 \pi \sigma^2}} \exp\!\Big(\!-\!\frac{(n - \mu)^2}{2 \sigma^2} \Big)
- }
-\end{aligned}$$
+Taking $\exp$ of this expression then yields a normalized Gaussian distribution.
+</div>
+</div>
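
All three boxed results are easy to sanity-check numerically.
A minimal Python sketch using NumPy and SciPy; the values of $N$ and $p$ are arbitrary examples:

```python
# Sketch: numerical check of the boxed results, using SciPy's binomial PMF;
# the values of N and p are arbitrary examples.
import numpy as np
from scipy.stats import binom, norm

N, p = 1000, 0.3
q = 1 - p
n = np.arange(N + 1)
P = binom.pmf(n, N, p)

mu = np.sum(n * P)                      # mean, should equal N p = 300
var = np.sum((n - mu)**2 * P)           # variance, should equal N p q = 210
print(mu, var)

# for large N, P_N(n) is close to a Gaussian with the same mu and sigma^2:
gauss = norm.pdf(n, loc=N * p, scale=np.sqrt(N * p * q))
print(np.max(np.abs(P - gauss)), P.max())   # difference is small compared to the peak
```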
## References
diff --git a/content/know/concept/convolution-theorem/index.pdc b/content/know/concept/convolution-theorem/index.pdc
index 9d1a666..1454cc0 100644
--- a/content/know/concept/convolution-theorem/index.pdc
+++ b/content/know/concept/convolution-theorem/index.pdc
@@ -32,7 +32,12 @@ $$\begin{aligned}
}
\end{aligned}$$
-To prove this, we expand the right-hand side of the theorem and
+<div class="accordion">
+<input type="checkbox" id="proof-fourier"/>
+<label for="proof-fourier">Proof</label>
+<div class="hidden">
+<label for="proof-fourier">Proof.</label>
+We expand the right-hand side of the theorem and
rearrange the integrals:
$$\begin{aligned}
@@ -45,8 +50,8 @@ $$\begin{aligned}
= A \cdot (f * g)(x)
\end{aligned}$$
-Then we do the same thing again, this time starting from a product in
-the $x$-domain:
+Then we do the same again,
+this time starting from a product in the $x$-domain:
$$\begin{aligned}
\hat{\mathcal{F}}\{f(x) \: g(x)\}
@@ -57,6 +62,8 @@ $$\begin{aligned}
&= B \int_{-\infty}^\infty \tilde{g}(k') \tilde{f}(k - k') \dd{k'}
= B \cdot (\tilde{f} * \tilde{g})(k)
\end{aligned}$$
+</div>
+</div>
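
The same identity can be illustrated with the discrete Fourier transform,
where it holds exactly for circular convolutions and no constants $A$, $B$ appear.
A minimal NumPy sketch with arbitrary random test signals:

```python
# Sketch: the discrete/circular analogue of the theorem, which is exact;
# f and g are arbitrary random test signals.
import numpy as np

rng = np.random.default_rng(0)
n = 128
f = rng.standard_normal(n)
g = rng.standard_normal(n)

# circular convolution (f * g)[j] = sum_m f[m] g[(j - m) mod n]
conv = np.array([sum(f[m] * g[(j - m) % n] for m in range(n)) for j in range(n)])

lhs = np.fft.fft(conv)
rhs = np.fft.fft(f) * np.fft.fft(g)
print(np.max(np.abs(lhs - rhs)))   # ~1e-12: equal up to rounding
```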
## Laplace transform
@@ -76,9 +83,14 @@ $$\begin{aligned}
\boxed{\hat{\mathcal{L}}\{(f * g)(t)\} = \tilde{f}(s) \: \tilde{g}(s)}
\end{aligned}$$
-We prove this by expanding the left-hand side. Note that the lower
-integration limit is 0 instead of $-\infty$, because we set both $f(t)$
-and $g(t)$ to zero for $t < 0$:
+<div class="accordion">
+<input type="checkbox" id="proof-laplace"/>
+<label for="proof-laplace">Proof</label>
+<div class="hidden">
+<label for="proof-laplace">Proof.</label>
+We expand the left-hand side.
+Note that the lower integration limit is 0 instead of $-\infty$,
+because we set both $f(t)$ and $g(t)$ to zero for $t < 0$:
$$\begin{aligned}
\hat{\mathcal{L}}\{(f * g)(t)\}
@@ -98,6 +110,8 @@ $$\begin{aligned}
&= \int_0^\infty \tilde{f}(s) g(t') \exp(- s t') \dd{t'}
= \tilde{f}(s) \: \tilde{g}(s)
\end{aligned}$$
+</div>
+</div>
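
As an illustration, here is a small SymPy sketch that verifies the identity
for one example pair, $f(t) = e^{-t}$ and $g(t) = t$:

```python
# Sketch: SymPy check of the identity for one example pair,
# f(t) = exp(-t) and g(t) = t, both zero for t < 0.
import sympy as sp

t, tau, s = sp.symbols('t tau s', positive=True)

integrand = sp.exp(-tau) * (t - tau)              # f(tau) * g(t - tau)
conv = sp.integrate(integrand, (tau, 0, t))       # (f * g)(t) = t - 1 + exp(-t)

lhs = sp.laplace_transform(conv, t, s, noconds=True)
rhs = (sp.laplace_transform(sp.exp(-t), t, s, noconds=True)
       * sp.laplace_transform(t, t, s, noconds=True))

print(sp.simplify(lhs - rhs))                     # 0
```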
diff --git a/content/know/concept/curvilinear-coordinates/index.pdc b/content/know/concept/curvilinear-coordinates/index.pdc
index e1c0465..925eda3 100644
--- a/content/know/concept/curvilinear-coordinates/index.pdc
+++ b/content/know/concept/curvilinear-coordinates/index.pdc
@@ -50,7 +50,7 @@ and [parabolic cylindrical coordinates](/know/concept/parabolic-cylindrical-coor
In the following subsections,
we derive general formulae to convert expressions
-from Cartesian coordinates in the new orthogonal system $(x_1, x_2, x_3)$.
+from Cartesian coordinates to the new orthogonal system $(x_1, x_2, x_3)$.
## Basis vectors
@@ -93,7 +93,26 @@ $$\begin{aligned}
## Gradient
-For a given direction $\dd{\ell}$, we know that
+In an orthogonal coordinate system,
+the gradient $\nabla f$ of a scalar $f$ is as follows,
+where $\vu{e}_1$, $\vu{e}_2$ and $\vu{e}_3$
+are the basis unit vectors respectively corresponding to $x_1$, $x_2$ and $x_3$:
+
+$$\begin{gathered}
+ \boxed{
+ \nabla f
+ = \vu{e}_1 \frac{1}{h_1} \pdv{f}{x_1}
+ + \vu{e}_2 \frac{1}{h_2} \pdv{f}{x_2}
+ + \vu{e}_3 \frac{1}{h_3} \pdv{f}{x_3}
+ }
+\end{gathered}$$
+
+<div class="accordion">
+<input type="checkbox" id="proof-grad"/>
+<label for="proof-grad">Proof</label>
+<div class="hidden">
+<label for="proof-grad">Proof.</label>
+For a direction $\dd{\ell}$, we know that
$\dv*{f}{\ell}$ is the component of $\nabla f$ in that direction:
$$\begin{aligned}
@@ -104,7 +123,7 @@ $$\begin{aligned}
\end{aligned}$$
Where $\vu{u}$ is simply a unit vector in the direction of $\dd{\ell}$.
-We can thus find an expression for the gradient $\nabla f$
+We thus find the expression for the gradient $\nabla f$
by choosing $\dd{\ell}$ to be $h_1 \dd{x_1}$, $h_2 \dd{x_2}$ and $h_3 \dd{x_3}$ in turn:
$$\begin{gathered}
@@ -112,49 +131,59 @@ $$\begin{gathered}
= \vu{e}_1 \dv{x_1}{\ell} \pdv{f}{x_1}
+ \vu{e}_2 \dv{x_2}{\ell} \pdv{f}{x_2}
+ \vu{e}_3 \dv{x_3}{\ell} \pdv{f}{x_3}
- \\
- \boxed{
- \nabla f
- = \vu{e}_1 \frac{1}{h_1} \pdv{f}{x_1}
- + \vu{e}_2 \frac{1}{h_2} \pdv{f}{x_2}
- + \vu{e}_3 \frac{1}{h_3} \pdv{f}{x_3}
- }
\end{gathered}$$
-
-Where $\vu{e}_1$, $\vu{e}_2$ and $\vu{e}_3$
-are the basis unit vectors respectively corresponding to $x_1$, $x_2$ and $x_3$.
+</div>
+</div>
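
For instance, in spherical coordinates $(r, \theta, \varphi)$,
the scale factors are $h_r = 1$, $h_\theta = r$ and $h_\varphi = r \sin\theta$,
so this formula reduces to the familiar expression:

$$\begin{aligned}
    \nabla f
    = \vu{e}_r \pdv{f}{r}
    + \vu{e}_\theta \frac{1}{r} \pdv{f}{\theta}
    + \vu{e}_\varphi \frac{1}{r \sin\theta} \pdv{f}{\varphi}
\end{aligned}$$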
## Divergence
-Consider a vector $\vb{V}$ in the target coordinate system
-with components $V_1$, $V_2$ and $V_3$:
+The divergence of a vector $\vb{V} = \vu{e}_1 V_1 + \vu{e}_2 V_2 + \vu{e}_3 V_3$
+in an orthogonal system is given by:
+
+$$\begin{aligned}
+ \boxed{
+ \nabla \cdot \vb{V}
+ = \frac{1}{h_1 h_2 h_3}
+ \Big( \pdv{(h_2 h_3 V_1)}{x_1} + \pdv{(h_1 h_3 V_2)}{x_2} + \pdv{(h_1 h_2 V_3)}{x_3} \Big)
+ }
+\end{aligned}$$
+
+<div class="accordion">
+<input type="checkbox" id="proof-div"/>
+<label for="proof-div">Proof</label>
+<div class="hidden">
+<label for="proof-div">Proof.</label>
+As preparation, we rewrite $\vb{V}$ as follows
+to introduce the scale factors:
$$\begin{aligned}
\vb{V}
- &= \vu{e}_1 V_1 + \vu{e}_2 V_2 + \vu{e}_3 V_3
- \\
&= \vu{e}_1 \frac{1}{h_2 h_3} (h_2 h_3 V_1)
+ \vu{e}_2 \frac{1}{h_1 h_3} (h_1 h_3 V_2)
+ \vu{e}_3 \frac{1}{h_1 h_2} (h_1 h_2 V_3)
\end{aligned}$$
-We take only the $\vu{e}_1$-component of this vector,
-and expand its divergence using a vector identity,
-where $f = h_2 h_3 V_1$ is a scalar
-and $\vb{U} = \vu{e}_1 / (h_2 h_3)$ is a vector:
+We start by taking only the $\vu{e}_1$-component of this vector,
+and expand its divergence using the following vector identity:
$$\begin{gathered}
\nabla \cdot (\vb{U} \: f)
= \vb{U} \cdot (\nabla f) + (\nabla \cdot \vb{U}) \: f
- \\
+\end{gathered}$$
+
+Inserting the scalar $f = h_2 h_3 V_1$
+and the vector $\vb{U} = \vu{e}_1 / (h_2 h_3)$,
+we arrive at:
+
+$$\begin{gathered}
\nabla \cdot \Big( \frac{\vu{e}_1}{h_2 h_3} (h_2 h_3 V_1) \Big)
= \frac{\vu{e}_1}{h_2 h_3} \cdot \Big( \nabla (h_2 h_3 V_1) \Big)
+ \Big( \nabla \cdot \frac{\vu{e}_1}{h_2 h_3} \Big) (h_2 h_3 V_1)
\end{gathered}$$
-The first term is straightforward to calculate
-thanks to our preceding expression for the gradient.
+The first right-hand term is easy to calculate
+thanks to our expression for the gradient $\nabla f$.
Only the $\vu{e}_1$-component survives due to the dot product:
$$\begin{aligned}
@@ -162,8 +191,8 @@ $$\begin{aligned}
= \frac{\vu{e}_1}{h_1 h_2 h_3} \pdv{(h_2 h_3 V_1)}{x_1}
\end{aligned}$$
-The second term is a bit more involved.
-To begin with, we use the gradient formula to note that:
+The second term is more involved.
+First, we use the gradient formula to observe that:
$$\begin{aligned}
\nabla x_1
@@ -177,7 +206,7 @@ $$\begin{aligned}
\end{aligned}$$
Because $\vu{e}_2 \cross \vu{e}_3 = \vu{e}_1$ in an orthogonal basis,
-we can get the vector whose divergence we want:
+these gradients can be used to express the vector whose divergence we want:
$$\begin{aligned}
\nabla x_2 \cross \nabla x_3
@@ -196,15 +225,9 @@ $$\begin{aligned}
\end{aligned}$$
After repeating this procedure for the other components of $\vb{V}$,
-we arrive at the following general expression for the divergence $\nabla \cdot \vb{V}$:
-
-$$\begin{aligned}
- \boxed{
- \nabla \cdot \vb{V}
- = \frac{1}{h_1 h_2 h_3}
- \Big( \pdv{(h_2 h_3 V_1)}{x_1} + \pdv{(h_1 h_3 V_2)}{x_2} + \pdv{(h_1 h_2 V_3)}{x_3} \Big)
- }
-\end{aligned}$$
+we get the desired general expression for the divergence.
+</div>
+</div>
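
As an example, inserting the spherical scale factors
$h_r = 1$, $h_\theta = r$ and $h_\varphi = r \sin\theta$ yields the well-known result:

$$\begin{aligned}
    \nabla \cdot \vb{V}
    = \frac{1}{r^2} \pdv{(r^2 V_r)}{r}
    + \frac{1}{r \sin\theta} \pdv{(\sin\theta \: V_\theta)}{\theta}
    + \frac{1}{r \sin\theta} \pdv{V_\varphi}{\varphi}
\end{aligned}$$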
## Laplacian
@@ -229,31 +252,55 @@ $$\begin{aligned}
## Curl
-We find the curl in a similar way as the divergence.
-Consider an arbitrary vector $\vb{V}$:
+The curl of a vector $\vb{V}$ is as follows
+in a general orthogonal curvilinear system:
+
+$$\begin{aligned}
+ \boxed{
+ \begin{aligned}
+ \nabla \times \vb{V}
+ &= \frac{\vu{e}_1}{h_2 h_3} \Big( \pdv{(h_3 V_3)}{x_2} - \pdv{(h_2 V_2)}{x_3} \Big)
+ \\
+ &+ \frac{\vu{e}_2}{h_1 h_3} \Big( \pdv{(h_1 V_1)}{x_3} - \pdv{(h_3 V_3)}{x_1} \Big)
+ \\
+ &+ \frac{\vu{e}_3}{h_1 h_2} \Big( \pdv{(h_2 V_2)}{x_1} - \pdv{(h_1 V_1)}{x_2} \Big)
+ \end{aligned}
+ }
+\end{aligned}$$
+
+<div class="accordion">
+<input type="checkbox" id="proof-curl"/>
+<label for="proof-curl">Proof</label>
+<div class="hidden">
+<label for="proof-curl">Proof.</label>
+The curl is found in a similar way as the divergence.
+We rewrite $\vb{V}$ like so:
$$\begin{aligned}
\vb{V}
- = \vu{e}_1 V_1 + \vu{e}_2 V_2 + \vu{e}_3 V_3
= \frac{\vu{e}_1}{h_1} (h_1 V_1) + \frac{\vu{e}_2}{h_2} (h_2 V_2) + \frac{\vu{e}_3}{h_3} (h_3 V_3)
\end{aligned}$$
-We expand the curl of its $\vu{e}_1$-component using a vector identity,
-where $f = h_1 V_1$ is a scalar and $\vb{U} = \vu{e}_1 / h_1$ is a vector:
+We expand the curl of its $\vu{e}_1$-component using the following vector identity:
$$\begin{gathered}
\nabla \cross (\vb{U} \: f)
= (\nabla \cross \vb{U}) \: f - \vb{U} \cross (\nabla f)
- \\
+\end{gathered}$$
+
+Inserting the scalar $f = h_1 V_1$
+and the vector $\vb{U} = \vu{e}_1 / h_1$, we arrive at:
+
+$$\begin{gathered}
\nabla \cross \Big( \frac{\vu{e}_1}{h_1} (h_1 V_1) \Big)
= \Big( \nabla \cross \frac{\vu{e}_1}{h_1} \Big) (h_1 V_1) - \frac{\vu{e}_1}{h_1} \cross \Big( \nabla (h_1 V_1) \Big)
\end{gathered}$$
-Previously, when calculating the divergence,
+Previously, when proving the divergence formula,
we already showed that $\vu{e}_1 / h_1 = \nabla x_1$.
Because the curl of a gradient is zero,
-the first term thus disappears, leaving only the second,
-which contains a gradient turning out to be:
+the first term disappears, leaving only the second,
+which contains a gradient that turns out to be:
$$\begin{aligned}
\nabla (h_1 V_1)
@@ -273,20 +320,9 @@ $$\begin{aligned}
\end{aligned}$$
If we go through the same process for the other components of $\vb{V}$
-and add the results together, we get the following expression for the curl $\nabla \cross \vb{V}$:
-
-$$\begin{aligned}
- \boxed{
- \begin{aligned}
- \nabla \times \vb{V}
- &= \frac{\vu{e}_1}{h_2 h_3} \Big( \pdv{(h_3 V_3)}{x_2} - \pdv{(h_2 V_2)}{x_3} \Big)
- \\
- &+ \frac{\vu{e}_2}{h_1 h_3} \Big( \pdv{(h_1 V_1)}{x_3} - \pdv{(h_3 V_3)}{x_1} \Big)
- \\
- &+ \frac{\vu{e}_3}{h_1 h_2} \Big( \pdv{(h_2 V_2)}{x_1} - \pdv{(h_1 V_1)}{x_2} \Big)
- \end{aligned}
- }
-\end{aligned}$$
+and add up the results, we get the desired expression for the curl.
+</div>
+</div>
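
For example, in cylindrical coordinates $(\rho, \varphi, z)$,
where $h_\rho = 1$, $h_\varphi = \rho$ and $h_z = 1$, this formula gives:

$$\begin{aligned}
    \nabla \cross \vb{V}
    = \vu{e}_\rho \Big( \frac{1}{\rho} \pdv{V_z}{\varphi} - \pdv{V_\varphi}{z} \Big)
    + \vu{e}_\varphi \Big( \pdv{V_\rho}{z} - \pdv{V_z}{\rho} \Big)
    + \vu{e}_z \frac{1}{\rho} \Big( \pdv{(\rho V_\varphi)}{\rho} - \pdv{V_\rho}{\varphi} \Big)
\end{aligned}$$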
## Differential elements
diff --git a/content/know/concept/dirac-delta-function/index.pdc b/content/know/concept/dirac-delta-function/index.pdc
index 76b6e97..9eecefd 100644
--- a/content/know/concept/dirac-delta-function/index.pdc
+++ b/content/know/concept/dirac-delta-function/index.pdc
@@ -21,7 +21,7 @@ defined to be 1:
$$\begin{aligned}
\boxed{
- \delta(x) =
+ \delta(x) \equiv
\begin{cases}
+\infty & \mathrm{if}\: x = 0 \\
0 & \mathrm{if}\: x \neq 0
@@ -56,12 +56,10 @@ following integral, which appears very often in the context of
[Fourier transforms](/know/concept/fourier-transform/):
$$\begin{aligned}
- \boxed{
- \delta(x)
- %= \lim_{n \to +\infty} \!\Big\{\frac{\sin(n x)}{\pi x}\Big\}
- = \frac{1}{2\pi} \int_{-\infty}^\infty \exp(i k x) \dd{k}
- \:\:\propto\:\: \hat{\mathcal{F}}\{1\}
- }
+ \delta(x)
+ = \lim_{n \to +\infty} \!\Big\{\frac{\sin(n x)}{\pi x}\Big\}
+ = \frac{1}{2\pi} \int_{-\infty}^\infty \exp(i k x) \dd{k}
+ \:\:\propto\:\: \hat{\mathcal{F}}\{1\}
\end{aligned}$$
When the argument of $\delta(x)$ is scaled, the delta function is itself scaled:
@@ -72,18 +70,22 @@ $$\begin{aligned}
}
\end{aligned}$$
-*__Proof.__ Because it is symmetric, $\delta(s x) = \delta(|s| x)$. Then by
-substituting $\sigma = |s| x$:*
+<div class="accordion">
+<input type="checkbox" id="proof-scale"/>
+<label for="proof-scale">Proof</label>
+<div class="hidden">
+<label for="proof-scale">Proof.</label>
+Because it is symmetric, $\delta(s x) = \delta(|s| x)$.
+Then by substituting $\sigma = |s| x$:
$$\begin{aligned}
\int \delta(|s| x) \dd{x}
&= \frac{1}{|s|} \int \delta(\sigma) \dd{\sigma} = \frac{1}{|s|}
\end{aligned}$$
+</div>
+</div>
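
This scaling behaviour can be illustrated numerically by standing in a narrow
normalized Gaussian for $\delta(x)$; the width $\epsilon$ and the factor $s$
below are arbitrary examples:

```python
# Sketch: integrate delta(s x) over x, with delta(x) approximated by a
# normalized Gaussian of small width eps; s and eps are arbitrary examples.
import numpy as np

eps, s = 1e-3, -2.5

x = np.linspace(-1, 1, 200001)
dx = x[1] - x[0]
delta_approx = np.exp(-(s * x)**2 / (2 * eps**2)) / np.sqrt(2 * np.pi * eps**2)

print(np.sum(delta_approx) * dx, 1 / abs(s))   # both ~ 0.4
```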
-*__Q.E.D.__*
-
-An even more impressive property is the behaviour of the derivative of
-$\delta(x)$:
+An even more impressive property is the behaviour of the derivative of $\delta(x)$:
$$\begin{aligned}
\boxed{
@@ -91,16 +93,21 @@ $$\begin{aligned}
}
\end{aligned}$$
-*__Proof.__ Note which variable is used for the
-differentiation, and that $\delta'(x - \xi) = - \delta'(\xi - x)$:*
+<div class="accordion">
+<input type="checkbox" id="proof-dv1"/>
+<label for="proof-dv1">Proof</label>
+<div class="hidden">
+<label for="proof-dv1">Proof.</label>
+Note which variable is used for the
+differentiation, and that $\delta'(x - \xi) = - \delta'(\xi - x)$:
$$\begin{aligned}
\int f(\xi) \: \dv{\delta(x - \xi)}{x} \dd{\xi}
&= \dv{x} \int f(\xi) \: \delta(x - \xi) \dd{\xi}
= f'(x)
\end{aligned}$$
-
-*__Q.E.D.__*
+</div>
+</div>
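
Reusing the nascent-delta idea from the scaling property above,
a short numerical illustration of this derivative rule;
the test function $f$ and the evaluation point $x_0$ are arbitrary:

```python
# Sketch: the derivative rule checked with a narrow Gaussian standing in for
# delta(x); the test function f and the evaluation point x0 are arbitrary.
import numpy as np

eps = 1e-2
xi = np.linspace(-10, 10, 400001)
dxi = xi[1] - xi[0]
x0 = 0.7

f = np.sin(xi)
# d/dx of the nascent delta delta(x - xi), evaluated at x = x0:
ddelta = -(x0 - xi) / eps**2 * np.exp(-(x0 - xi)**2 / (2 * eps**2)) / np.sqrt(2 * np.pi * eps**2)
print(np.sum(f * ddelta) * dxi, np.cos(x0))   # both ~ f'(x0)
```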
This property also generalizes nicely for the higher-order derivatives:
diff --git a/content/know/concept/heaviside-step-function/index.pdc b/content/know/concept/heaviside-step-function/index.pdc
index 0471acf..dbbca6f 100644
--- a/content/know/concept/heaviside-step-function/index.pdc
+++ b/content/know/concept/heaviside-step-function/index.pdc
@@ -50,7 +50,23 @@ $$\begin{aligned}
\end{aligned}$$
The [Fourier transform](/know/concept/fourier-transform/)
-of $\Theta(t)$ is noteworthy.
+of $\Theta(t)$ is as follows,
+where $\pv{}$ is the Cauchy principal value,
+$A$ and $s$ are constants from the FT's definition,
+and $\mathrm{sgn}$ is the signum function:
+
+$$\begin{aligned}
+ \boxed{
+ \tilde{\Theta}(\omega)
+ = \frac{A}{|s|} \Big( \pi \delta(\omega) + i \: \mathrm{sgn}(s) \pv{\frac{1}{\omega}} \Big)
+ }
+\end{aligned}$$
+
+<div class="accordion">
+<input type="checkbox" id="proof-fourier"/>
+<label for="proof-fourier">Proof</label>
+<div class="hidden">
+<label for="proof-fourier">Proof.</label>
In this case, it is easiest to use $\Theta(0) = 1/2$,
such that the Heaviside step function can be expressed
using the signum function $\mathrm{sgn}(t)$:
@@ -77,15 +93,10 @@ $$\begin{aligned}
&= A \pi \delta(s \omega) + \frac{A}{2} \pv{\int_{-\infty}^\infty \mathrm{sgn}(t) \exp(i s \omega t) \dd{t}}
= \frac{A}{|s|} \pi \delta(\omega) + i \frac{A}{s} \pv{\frac{1}{\omega}}
\end{aligned}$$
+</div>
+</div>
The use of $\pv{}$ without an integral is an abuse of notation,
and means that this result only makes sense when wrapped in an integral.
Formally, $\pv{\{1 / \omega\}}$ is a [Schwartz distribution](/know/concept/schwartz-distribution/).
-We thus have:
-$$\begin{aligned}
- \boxed{
- \tilde{\Theta}(\omega)
- = \frac{A}{|s|} \Big( \pi \delta(\omega) + i \: \mathrm{sgn}(s) \pv{\frac{1}{\omega}} \Big)
- }
-\end{aligned}$$
diff --git a/content/know/concept/holomorphic-function/index.pdc b/content/know/concept/holomorphic-function/index.pdc
index 1077060..1c2f092 100644
--- a/content/know/concept/holomorphic-function/index.pdc
+++ b/content/know/concept/holomorphic-function/index.pdc
@@ -77,8 +77,12 @@ $$\begin{aligned}
}
\end{aligned}$$
-*__Proof__*.
-*Just like before, we decompose $f(z)$ into its real and imaginary parts:*
+<div class="accordion">
+<input type="checkbox" id="proof-int-theorem"/>
+<label for="proof-int-theorem">Proof</label>
+<div class="hidden">
+<label for="proof-int-theorem">Proof.</label>
+Just like before, we decompose $f(z)$ into its real and imaginary parts:
$$\begin{aligned}
\oint_C f(z) \:dz
@@ -88,16 +92,17 @@ $$\begin{aligned}
&= \oint_C u \dd{x} - v \dd{y} + i \oint_C v \dd{x} + u \dd{y}
\end{aligned}$$
-*Using Green's theorem, we integrate over the area $A$ enclosed by $C$:*
+Using Green's theorem, we integrate over the area $A$ enclosed by $C$:
$$\begin{aligned}
\oint_C f(z) \:dz
&= - \iint_A \pdv{v}{x} + \pdv{u}{y} \dd{x} \dd{y} + i \iint_A \pdv{u}{x} - \pdv{v}{y} \dd{x} \dd{y}
\end{aligned}$$
-*Since $f(z)$ is holomorphic, $u$ and $v$ satisfy the Cauchy-Riemann
-equations, such that the integrands disappear and the final result is zero.*
-*__Q.E.D.__*
+Since $f(z)$ is holomorphic, $u$ and $v$ satisfy the Cauchy-Riemann
+equations, such that the integrands disappear and the final result is zero.
+</div>
+</div>
An interesting consequence is **Cauchy's integral formula**, which
states that the value of $f(z)$ at an arbitrary point $z_0$ is
@@ -109,11 +114,15 @@ $$\begin{aligned}
}
\end{aligned}$$
-*__Proof__*.
-*Thanks to the integral theorem, we know that the shape and size
+<div class="accordion">
+<input type="checkbox" id="proof-int-formula"/>
+<label for="proof-int-formula">Proof</label>
+<div class="hidden">
+<label for="proof-int-formula">Proof.</label>
+Thanks to the integral theorem, we know that the shape and size
of $C$ is irrelevant. Therefore we choose it to be a circle with radius $r$,
such that the integration variable becomes $z = z_0 + r e^{i \theta}$. Then
-we integrate by substitution:*
+we integrate by substitution:
$$\begin{aligned}
\frac{1}{2 \pi i} \oint_C \frac{f(z)}{z - z_0} \dd{z}
@@ -121,15 +130,15 @@ $$\begin{aligned}
= \frac{1}{2 \pi} \int_0^{2 \pi} f(z_0 + r e^{i \theta}) \dd{\theta}
\end{aligned}$$
-*We may choose an arbitrarily small radius $r$, such that the contour approaches $z_0$:*
+We may choose an arbitrarily small radius $r$, such that the contour approaches $z_0$:
$$\begin{aligned}
\lim_{r \to 0}\:\: \frac{1}{2 \pi} \int_0^{2 \pi} f(z_0 + r e^{i \theta}) \dd{\theta}
&= \frac{f(z_0)}{2 \pi} \int_0^{2 \pi} \dd{\theta}
= f(z_0)
\end{aligned}$$
-
-*__Q.E.D.__*
+</div>
+</div>
Similarly, **Cauchy's differentiation formula**,
or **Cauchy's integral formula for derivatives**
@@ -143,16 +152,20 @@ $$\begin{aligned}
}
\end{aligned}$$
-*__Proof__*.
-*By definition, the first derivative $f'(z)$ of a
-holomorphic function $f(z)$ exists and is given by:*
+<div class="accordion">
+<input type="checkbox" id="proof-diff-formula"/>
+<label for="proof-diff-formula">Proof</label>
+<div class="hidden">
+<label for="proof-diff-formula">Proof.</label>
+By definition, the first derivative $f'(z)$ of a
+holomorphic function exists and is given by:
$$\begin{aligned}
f'(z_0)
= \lim_{z \to z_0} \frac{f(z) - f(z_0)}{z - z_0}
\end{aligned}$$
-*We evaluate the numerator using Cauchy's integral theorem as follows:*
+We evaluate the numerator using Cauchy's integral theorem as follows:
$$\begin{aligned}
f'(z_0)
@@ -166,7 +179,7 @@ $$\begin{aligned}
\oint_C \frac{f(\zeta) (z - z_0)}{(\zeta - z)(\zeta - z_0)} \dd{\zeta}
\end{aligned}$$
-*This contour integral converges uniformly, so we may apply the limit on the inside:*
+This contour integral converges uniformly, so we may apply the limit on the inside:
$$\begin{aligned}
f'(z_0)
@@ -174,9 +187,10 @@ $$\begin{aligned}
= \frac{1}{2 \pi i} \oint_C \frac{f(\zeta)}{(\zeta - z_0)^2} \dd{\zeta}
\end{aligned}$$
-*Since the second-order derivative $f''(z)$ is simply the derivative of $f'(z)$,
-this proof works inductively for all higher orders $n$.*
-*__Q.E.D.__*
+Since the second-order derivative $f''(z)$ is simply the derivative of $f'(z)$,
+this proof works inductively for all higher orders $n$.
+</div>
+</div>
## Residue theorem
@@ -205,24 +219,29 @@ $$\begin{aligned}
}
\end{aligned}$$
-*__Proof__*. *From the definition of a meromorphic function,
+<div class="accordion">
+<input type="checkbox" id="proof-res-theorem"/>
+<label for="proof-res-theorem">Proof</label>
+<div class="hidden">
+<label for="proof-res-theorem">Proof.</label>
+From the definition of a meromorphic function,
we know that we can decompose $f(z)$ like so,
-where $h(z)$ is holomorphic and $p$ are all its poles:*
+where $h(z)$ is holomorphic and $p$ are all its poles:
$$\begin{aligned}
f(z) = h(z) + \sum_{p} \frac{R_p}{z - z_p}
\end{aligned}$$
-*We integrate this over a contour $C$ which contains all poles, and apply
-both Cauchy's integral theorem and Cauchy's integral formula to get:*
+We integrate this over a contour $C$ which contains all poles, and apply
+both Cauchy's integral theorem and Cauchy's integral formula to get:
$$\begin{aligned}
\oint_C f(z) \dd{z}
&= \oint_C h(z) \dd{z} + \sum_{p} R_p \oint_C \frac{1}{z - z_p} \dd{z}
= \sum_{p} R_p \: 2 \pi i
\end{aligned}$$
-
-*__Q.E.D.__*
+</div>
+</div>
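
A quick numerical illustration: for an example function with two simple poles
inside the unit circle, a discretized contour integral indeed returns
$2 \pi i$ times the sum of the residues (the poles and residues below are arbitrary):

```python
# Sketch: numerical check for an example function with two simple poles
# inside the unit circle; the poles and residues below are arbitrary.
import numpy as np

a, b = 0.4 + 0.1j, -0.3 - 0.2j            # pole locations, both inside |z| = 1
theta = np.linspace(0, 2 * np.pi, 4000, endpoint=False)
z = np.exp(1j * theta)                     # contour C: the unit circle
dz = 1j * z * (theta[1] - theta[0])

f = 1 / (z - a) + 2 / (z - b)              # residues R_a = 1, R_b = 2
print(np.sum(f * dz), 2j * np.pi * (1 + 2))   # both ~ 6 pi i
```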
This theorem might not seem very useful,
but in fact, thanks to some clever mathematical magic,
diff --git a/content/know/concept/parsevals-theorem/index.pdc b/content/know/concept/parsevals-theorem/index.pdc
index 824afa6..9f440f2 100644
--- a/content/know/concept/parsevals-theorem/index.pdc
+++ b/content/know/concept/parsevals-theorem/index.pdc
@@ -17,24 +17,24 @@ markup: pandoc
and the inner product of their [Fourier transforms](/know/concept/fourier-transform/)
$\tilde{f}(k)$ and $\tilde{g}(k)$.
There are two equivalent ways of stating it,
-where $A$, $B$, and $s$ are constants from the Fourier transform's definition:
+where $A$, $B$, and $s$ are constants from the FT's definition:
$$\begin{aligned}
\boxed{
- \braket{f(x)}{g(x)} = \frac{2 \pi B^2}{|s|} \braket*{\tilde{f}(k)}{\tilde{g}(k)}
- }
- \\
- \boxed{
- \braket*{\tilde{f}(k)}{\tilde{g}(k)} = \frac{2 \pi A^2}{|s|} \braket{f(x)}{g(x)}
+ \begin{aligned}
+ \braket{f(x)}{g(x)} &= \frac{2 \pi B^2}{|s|} \braket*{\tilde{f}(k)}{\tilde{g}(k)}
+ \\
+ \braket*{\tilde{f}(k)}{\tilde{g}(k)} &= \frac{2 \pi A^2}{|s|} \braket{f(x)}{g(x)}
+ \end{aligned}
}
\end{aligned}$$
-For this reason, physicists like to define the Fourier transform
-with $A\!=\!B\!=\!1 / \sqrt{2\pi}$ and $|s|\!=\!1$, because then it nicely
-conserves the functions' normalization.
-
-To prove the theorem, we insert the inverse FT into the inner product
-definition:
+<div class="accordion">
+<input type="checkbox" id="proof-fourier"/>
+<label for="proof-fourier">Proof</label>
+<div class="hidden">
+<label for="proof-fourier">Proof.</label>
+We insert the inverse FT into the definition of the inner product:
$$\begin{aligned}
\braket{f}{g}
@@ -54,7 +54,7 @@ $$\begin{aligned}
\end{aligned}$$
Where $\delta(k)$ is the [Dirac delta function](/know/concept/dirac-delta-function/).
-Note that we can equally well do the proof in the opposite direction,
+Note that we can equally well do this proof in the opposite direction,
which yields an equivalent result:
$$\begin{aligned}
@@ -73,6 +73,12 @@ $$\begin{aligned}
&= \frac{2 \pi A^2}{|s|} \int_{-\infty}^\infty f^*(x) \: g(x) \dd{x}
= \frac{2 \pi A^2}{|s|} \braket{f}{g}
\end{aligned}$$
+</div>
+</div>
+
+For this reason, physicists like to define the Fourier transform
+with $A\!=\!B\!=\!1 / \sqrt{2\pi}$ and $|s|\!=\!1$, because then it nicely
+conserves the functions' normalization.
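
This choice is easy to check numerically.
A minimal sketch, assuming the forward transform
$\tilde{f}(k) = A \int f(x) \exp(-i k x) \dd{x}$ with $A = 1/\sqrt{2\pi}$
(the sign of the exponent does not affect the result), for two example Gaussians:

```python
# Sketch: Parseval's theorem with A = B = 1/sqrt(2 pi) and s = 1,
# checked by brute-force quadrature for two example Gaussians.
import numpy as np

x = np.linspace(-15, 15, 2001)
k = np.linspace(-15, 15, 2001)
dx, dk = x[1] - x[0], k[1] - k[0]
A = 1 / np.sqrt(2 * np.pi)

f = np.exp(-x**2)                 # example functions
g = np.exp(-(x - 1)**2 / 2)

# forward transform f~(k) = A * integral of f(x) exp(-i k x) dx  (assumed convention)
ft = np.array([A * np.sum(f * np.exp(-1j * kk * x)) * dx for kk in k])
gt = np.array([A * np.sum(g * np.exp(-1j * kk * x)) * dx for kk in k])

lhs = np.sum(np.conj(f) * g) * dx        # <f|g>
rhs = np.sum(np.conj(ft) * gt) * dk      # <f~|g~>
print(lhs, rhs.real)                     # equal up to discretization error
```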