Diffstat (limited to 'content/know/concept')
-rw-r--r--  content/know/concept/_index.md                                        |   8
-rw-r--r--  content/know/concept/blochs-theorem/index.pdc                         | 115
-rw-r--r--  content/know/concept/convolution-theorem/index.pdc                    | 100
-rw-r--r--  content/know/concept/dirac-delta-function/index.pdc                   | 109
-rw-r--r--  content/know/concept/dirac-notation/index.pdc                         | 129
-rw-r--r--  content/know/concept/fourier-transform/index.pdc                      | 117
-rw-r--r--  content/know/concept/gram-schmidt-method/index.pdc                    |  47
-rw-r--r--  content/know/concept/hilbert-space/index.pdc                          | 202
-rw-r--r--  content/know/concept/legendre-transform/index.pdc                     |  89
-rw-r--r--  content/know/concept/parsevals-theorem/index.pdc                      |  76
-rw-r--r--  content/know/concept/partial-fraction-decomposition/index.pdc         |  60
-rw-r--r--  content/know/concept/pauli-exclusion-principle/index.pdc              | 125
-rw-r--r--  content/know/concept/probability-current/index.pdc                    |  98
-rw-r--r--  content/know/concept/slater-determinant/index.pdc                     |  54
-rw-r--r--  content/know/concept/sturm-liouville-theory/index.pdc                 | 346
-rw-r--r--  content/know/concept/time-independent-perturbation-theory/index.pdc   | 329
-rw-r--r--  content/know/concept/wentzel-kramers-brillouin-approximation/index.pdc | 198
17 files changed, 2202 insertions, 0 deletions
diff --git a/content/know/concept/_index.md b/content/know/concept/_index.md
new file mode 100644
index 0000000..956724a
--- /dev/null
+++ b/content/know/concept/_index.md
@@ -0,0 +1,8 @@
+---
+title: "List of concepts"
+date: 2021-02-22T20:38:58+01:00
+draft: false
+layout: "know-list"
+---
+
+This is an alphabetical list of the concepts in this knowledge base.
diff --git a/content/know/concept/blochs-theorem/index.pdc b/content/know/concept/blochs-theorem/index.pdc
new file mode 100644
index 0000000..1828d8a
--- /dev/null
+++ b/content/know/concept/blochs-theorem/index.pdc
@@ -0,0 +1,115 @@
+---
+title: "Bloch's theorem"
+firstLetter: "B"
+publishDate: 2021-02-22
+categories:
+- Quantum mechanics
+
+date: 2021-02-22T20:02:14+01:00
+draft: false
+markup: pandoc
+---
+
+# Bloch's theorem
+In quantum mechanics, **Bloch's theorem** states that,
+given a potential $V(\vec{r})$ which is periodic on a lattice,
+i.e. $V(\vec{r}) = V(\vec{r} + \vec{a})$
+for a primitive lattice vector $\vec{a}$,
+then it follows that the solutions $\psi(\vec{r})$
+to the time-independent Schrödinger equation
+take the following form,
+where the function $u(\vec{r})$ is periodic on the same lattice,
+i.e. $u(\vec{r}) = u(\vec{r} + \vec{a})$:
+
+$$
+\begin{aligned}
+ \boxed{
+ \psi(\vec{r}) = u(\vec{r}) e^{i \vec{k} \cdot \vec{r}}
+ }
+\end{aligned}
+$$
+
+In other words, in a periodic potential,
+the solutions are simply plane waves with a periodic modulation,
+known as **Bloch functions** or **Bloch states**.
+
+This is surprisingly easy to prove:
+if the Hamiltonian $\hat{H}$ is lattice-periodic,
+then both $\psi(\vec{r})$ and $\psi(\vec{r} + \vec{a})$
+are eigenstates with the same energy:
+
+$$
+\begin{aligned}
+ \hat{H} \psi(\vec{r}) = E \psi(\vec{r})
+ \qquad
+ \hat{H} \psi(\vec{r} + \vec{a}) = E \psi(\vec{r} + \vec{a})
+\end{aligned}
+$$
+
+Now define the unitary translation operator $\hat{T}(\vec{a})$ such that
+$\psi(\vec{r} + \vec{a}) = \hat{T}(\vec{a}) \psi(\vec{r})$.
+From the previous equation, we then know that:
+
+$$
+\begin{aligned}
+ \hat{H} \hat{T}(\vec{a}) \psi(\vec{r})
+ = E \hat{T}(\vec{a}) \psi(\vec{r})
+ = \hat{T}(\vec{a}) \big(E \psi(\vec{r})\big)
+ = \hat{T}(\vec{a}) \hat{H} \psi(\vec{r})
+\end{aligned}
+$$
+
+In other words, if $\hat{H}$ is lattice-periodic,
+then it will commute with $\hat{T}(\vec{a})$,
+i.e. $[\hat{H}, \hat{T}(\vec{a})] = 0$.
+Consequently, $\hat{H}$ and $\hat{T}(\vec{a})$ must share eigenstates $\psi(\vec{r})$:
+
+$$
+\begin{aligned}
+ \hat{H} \:\psi(\vec{r}) = E \:\psi(\vec{r})
+ \qquad
+ \hat{T}(\vec{a}) \:\psi(\vec{r}) = \tau \:\psi(\vec{r})
+\end{aligned}
+$$
+
+Since $\hat{T}$ is unitary,
+its eigenvalues $\tau$ must have the form $e^{i \theta}$, with $\theta$ real.
+Therefore a translation by $\vec{a}$ only introduces a phase factor,
+which we can write as $e^{i \vec{k} \cdot \vec{a}}$ for some vector $\vec{k}$:
+
+$$
+\begin{aligned}
+ \psi(\vec{r} + \vec{a})
+ = \hat{T}(\vec{a}) \:\psi(\vec{r})
+ = e^{i \theta} \:\psi(\vec{r})
+ = e^{i \vec{k} \cdot \vec{a}} \:\psi(\vec{r})
+\end{aligned}
+$$
+
+Let us now define the following function,
+keeping our arbitrary choice of $\vec{k}$:
+
+$$
+\begin{aligned}
+ u(\vec{r})
+ = e^{- i \vec{k} \cdot \vec{r}} \:\psi(\vec{r})
+\end{aligned}
+$$
+
+As it turns out, this function is guaranteed to be lattice-periodic for any $\vec{k}$:
+
+$$
+\begin{aligned}
+ u(\vec{r} + \vec{a})
+ &= e^{- i \vec{k} \cdot (\vec{r} + \vec{a})} \:\psi(\vec{r} + \vec{a})
+ \\
+ &= e^{- i \vec{k} \cdot \vec{r}} e^{- i \vec{k} \cdot \vec{a}} e^{i \vec{k} \cdot \vec{a}} \:\psi(\vec{r})
+ \\
+ &= e^{- i \vec{k} \cdot \vec{r}} \:\psi(\vec{r})
+ \\
+ &= u(\vec{r})
+\end{aligned}
+$$
+
+Bloch's theorem then follows by
+solving the definition of $u(\vec{r})$ for $\psi(\vec{r})$.
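As a numerical sanity check, we can verify the crux of the proof, namely that a lattice-periodic Hamiltonian commutes with the translation operator $\hat{T}(\vec{a})$. The sketch below uses a 1D tight-binding ring (an illustrative model not from the text; the ring size, period, and potential values are arbitrary choices):

```python
import numpy as np

N, a = 12, 2                             # N sites, potential period a = 2 sites
x = np.arange(N)
V = np.where(x % a == 0, 1.0, -0.5)      # lattice-periodic on-site potential

# Hamiltonian: nearest-neighbour hopping plus the on-site potential,
# with periodic boundary conditions
H = np.diag(V).astype(complex)
for n in range(N):
    H[n, (n + 1) % N] = H[(n + 1) % N, n] = -1.0

# T(a): (T psi)[n] = psi[n + a], a unitary permutation matrix
T = np.zeros((N, N))
for n in range(N):
    T[n, (n + a) % N] = 1.0

print(np.allclose(H @ T, T @ H))   # True: [H, T(a)] = 0
```

Since $[\hat{H}, \hat{T}(\vec{a})] = 0$, the two matrices can be simultaneously diagonalized, exactly as the proof requires.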
diff --git a/content/know/concept/convolution-theorem/index.pdc b/content/know/concept/convolution-theorem/index.pdc
new file mode 100644
index 0000000..fc96f30
--- /dev/null
+++ b/content/know/concept/convolution-theorem/index.pdc
@@ -0,0 +1,100 @@
+---
+title: "Convolution theorem"
+firstLetter: "C"
+publishDate: 2021-02-22
+categories:
+- Mathematics
+
+date: 2021-02-22T21:35:23+01:00
+draft: false
+markup: pandoc
+---
+
+# Convolution theorem
+
+The **convolution theorem** states that a convolution in the direct domain
+is equal to a product in the frequency domain. This is especially useful
+for computation, replacing an $\mathcal{O}(n^2)$ convolution with an
+$\mathcal{O}(n \log(n))$ transform and product.
+
+## Fourier transform
+
+The convolution theorem is usually expressed as follows, where
+$\hat{\mathcal{F}}$ is the [Fourier transform](/know/concept/fourier-transform/),
+and $A$ and $B$ are constants from its definition:
+
+$$\begin{aligned}
+ \boxed{
+ \begin{aligned}
+ A \cdot (f * g)(x) &= \hat{\mathcal{F}}^{-1}\{\tilde{f}(k) \: \tilde{g}(k)\} \\
+ B \cdot (\tilde{f} * \tilde{g})(k) &= \hat{\mathcal{F}}\{f(x) \: g(x)\}
+ \end{aligned}
+ }
+\end{aligned}$$
+
+To prove this, we expand the right-hand side of the theorem and
+rearrange the integrals:
+
+$$\begin{aligned}
+ \hat{\mathcal{F}}^{-1}\{\tilde{f}(k) \: \tilde{g}(k)\}
+ &= B \int_{-\infty}^\infty \tilde{f}(k) \Big( A \int_{-\infty}^\infty g(x') \exp(i s k x') \dd{x'} \Big) \exp(-i s k x) \dd{k}
+ \\
+ &= A \int_{-\infty}^\infty g(x') \Big( B \int_{-\infty}^\infty \tilde{f}(k) \exp(- i s k (x - x')) \dd{k} \Big) \dd{x'}
+ \\
+ &= A \int_{-\infty}^\infty g(x') f(x - x') \dd{x'}
+ = A \cdot (f * g)(x)
+\end{aligned}$$
+
+Then we do the same thing again, this time starting from a product in
+the $x$-domain:
+
+$$\begin{aligned}
+ \hat{\mathcal{F}}\{f(x) \: g(x)\}
+ &= A \int_{-\infty}^\infty f(x) \Big( B \int_{-\infty}^\infty \tilde{g}(k') \exp(- i s x k') \dd{k'} \Big) \exp(i s k x) \dd{x}
+ \\
+ &= B \int_{-\infty}^\infty \tilde{g}(k') \Big( A \int_{-\infty}^\infty f(x) \exp(i s x (k - k')) \dd{x} \Big) \dd{k'}
+ \\
+ &= B \int_{-\infty}^\infty \tilde{g}(k') \tilde{f}(k - k') \dd{k'}
+ = B \cdot (\tilde{f} * \tilde{g})(k)
+\end{aligned}$$
+
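The discrete analogue of this theorem can be checked numerically. The sketch below (using NumPy's DFT conventions, which amount to one particular choice of $A$, $B$, and $s$) compares a directly computed circular convolution against a pointwise product in the frequency domain:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(64)
g = rng.standard_normal(64)

# direct circular convolution: (f * g)[n] = sum_m f[(n - m) mod N] g[m]
conv = np.array([sum(f[(n - m) % 64] * g[m] for m in range(64))
                 for n in range(64)])

# convolution theorem: transform, multiply pointwise, transform back
via_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real

print(np.allclose(conv, via_fft))   # True
```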
+
+## Laplace transform
+
+For functions $f(t)$ and $g(t)$ which are only defined for $t \ge 0$,
+the convolution theorem can also be stated using the Laplace transform:
+
+$$\begin{aligned}
+ \boxed{(f * g)(t) = \hat{\mathcal{L}}^{-1}\{\tilde{f}(s) \: \tilde{g}(s)\}}
+\end{aligned}$$
+
+Because the inverse Laplace transform $\hat{\mathcal{L}}^{-1}$ is quite
+unpleasant, the theorem is often stated using the forward transform
+instead:
+
+$$\begin{aligned}
+ \boxed{\hat{\mathcal{L}}\{(f * g)(t)\} = \tilde{f}(s) \: \tilde{g}(s)}
+\end{aligned}$$
+
+We prove this by expanding the left-hand side. Note that the lower
+integration limit is 0 instead of $-\infty$, because we set both $f(t)$
+and $g(t)$ to zero for $t < 0$:
+
+$$\begin{aligned}
+ \hat{\mathcal{L}}\{(f * g)(t)\}
+ &= \int_0^\infty \Big( \int_0^\infty g(t') f(t - t') \dd{t'} \Big) \exp(- s t) \dd{t}
+ \\
+ &= \int_0^\infty \Big( \int_0^\infty f(t - t') \exp(- s t) \dd{t} \Big) g(t') \dd{t'}
+\end{aligned}$$
+
+Then we define a new integration variable $\tau = t - t'$; its lower limit
+can be taken as $0$, since $f(\tau) = 0$ for $\tau < 0$. This yields:
+
+$$\begin{aligned}
+ \hat{\mathcal{L}}\{(f * g)(t)\}
+ &= \int_0^\infty \Big( \int_0^\infty f(\tau) \exp(- s (\tau + t')) \dd{\tau} \Big) g(t') \dd{t'}
+ \\
+ &= \int_0^\infty \Big( \int_0^\infty f(\tau) \exp(- s \tau) \dd{\tau} \Big) g(t') \exp(- s t') \dd{t'}
+ \\
+ &= \int_0^\infty \tilde{f}(s) g(t') \exp(- s t') \dd{t'}
+ = \tilde{f}(s) \: \tilde{g}(s)
+\end{aligned}$$
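As a sanity check, the Laplace version can be verified numerically. Below, with the illustrative choice $f(t) = g(t) = e^{-t}$ (not from the text), the convolution is $(f * g)(t) = t e^{-t}$, and both sides of the theorem are evaluated at one value of $s$:

```python
import numpy as np
from scipy.integrate import quad

s = 1.7   # arbitrary point in the s-domain

# (f*g)(t) = int_0^t e^{-t'} e^{-(t-t')} dt' = t e^{-t}
conv = lambda t: t * np.exp(-t)

# L{f*g}(s), computed directly
lhs, _ = quad(lambda t: conv(t) * np.exp(-s * t), 0, np.inf)

# F(s) = L{e^{-t}}(s) = 1/(s+1), so the right-hand side is F(s)^2
F, _ = quad(lambda t: np.exp(-t) * np.exp(-s * t), 0, np.inf)

print(np.isclose(lhs, F * F))   # True: both equal 1/(s+1)^2
```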
diff --git a/content/know/concept/dirac-delta-function/index.pdc b/content/know/concept/dirac-delta-function/index.pdc
new file mode 100644
index 0000000..3982afc
--- /dev/null
+++ b/content/know/concept/dirac-delta-function/index.pdc
@@ -0,0 +1,109 @@
+---
+title: "Dirac delta function"
+firstLetter: "D"
+publishDate: 2021-02-22
+categories:
+- Mathematics
+- Physics
+
+date: 2021-02-22T21:35:38+01:00
+draft: false
+markup: pandoc
+---
+
+# Dirac delta function
+
+The **Dirac delta function** $\delta(x)$, often just called the **delta function**,
+is an infinitely narrow discontinuous "spike" at $x = 0$ whose area is
+defined to be 1:
+
+$$\begin{aligned}
+ \boxed{
+ \delta(x) =
+ \begin{cases}
+ +\infty & \mathrm{if}\: x = 0 \\
+ 0 & \mathrm{if}\: x \neq 0
+ \end{cases}
+ \quad \mathrm{and} \quad
+ \int_{-\varepsilon}^\varepsilon \delta(x) \dd{x} = 1
+ }
+\end{aligned}$$
+
+It is sometimes also called the **sampling function**, due to its most
+important property: the so-called **sampling property**:
+
+$$\begin{aligned}
+ \boxed{
+    \int f(x) \: \delta(x - x_0) \dd{x} = \int f(x) \: \delta(x_0 - x) \dd{x} = f(x_0)
+ }
+\end{aligned}$$
+
+$\delta(x)$ is thus an effective weapon against integrals. This may not seem very
+useful due to its "unnatural" definition, but in fact it appears as the
+limit of several reasonable functions:
+
+$$\begin{aligned}
+ \delta(x)
+ = \lim_{n \to +\infty} \!\Big\{ \frac{n}{\sqrt{\pi}} \exp(- n^2 x^2) \Big\}
+ = \lim_{n \to +\infty} \!\Big\{ \frac{n}{\pi} \frac{1}{1 + n^2 x^2} \Big\}
+ = \lim_{n \to +\infty} \!\Big\{ \frac{\sin(n x)}{\pi x} \Big\}
+\end{aligned}$$
+
+The last one is especially important, since it is equivalent to the
+following integral, which appears very often in the context of
+[Fourier transforms](/know/concept/fourier-transform/):
+
+$$\begin{aligned}
+ \boxed{
+ \delta(x)
+ %= \lim_{n \to +\infty} \!\Big\{\frac{\sin(n x)}{\pi x}\Big\}
+ = \frac{1}{2\pi} \int_{-\infty}^\infty \exp(i k x) \dd{k}
+ \:\:\propto\:\: \hat{\mathcal{F}}\{1\}
+ }
+\end{aligned}$$
+
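These limit representations can be checked numerically. The sketch below (with the arbitrary test function $\cos(x)$, chosen so that $f(0) = 1$) evaluates the sampling integral using the Gaussian representation for growing $n$:

```python
import numpy as np

x = np.linspace(-5, 5, 200001)           # fine grid for a Riemann sum
dx = x[1] - x[0]
f = np.cos(x)                            # smooth test function, f(0) = 1

for n in [1, 10, 100]:
    # Gaussian representation of delta(x), normalized to unit area
    delta_n = n / np.sqrt(np.pi) * np.exp(-n**2 * x**2)
    approx = np.sum(f * delta_n) * dx    # approximates int f(x) delta(x) dx
    print(n, approx)                     # tends to f(0) = 1 as n grows
```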
+When the argument of $\delta(x)$ is scaled, the delta function is itself scaled:
+
+$$\begin{aligned}
+ \boxed{
+ \delta(s x) = \frac{1}{|s|} \delta(x)
+ }
+\end{aligned}$$
+
+*__Proof.__ Because the delta function is even, $\delta(s x) = \delta(|s| x)$. Then by
+substituting $\sigma = |s| x$:*
+
+$$\begin{aligned}
+ \int \delta(|s| x) \dd{x}
+ &= \frac{1}{|s|} \int \delta(\sigma) \dd{\sigma} = \frac{1}{|s|}
+\end{aligned}$$
+
+*__Q.E.D.__*
+
+An even more impressive property is the behaviour of the derivative of
+$\delta(x)$:
+
+$$\begin{aligned}
+ \boxed{
+ \int f(\xi) \: \delta'(x - \xi) \dd{\xi} = f'(x)
+ }
+\end{aligned}$$
+
+*__Proof.__ Note which variable is used for the
+differentiation, and that $\delta'(x - \xi) = - \delta'(\xi - x)$:*
+
+$$\begin{aligned}
+ \int f(\xi) \: \dv{\delta(x - \xi)}{x} \dd{\xi}
+    &= \dv{x} \int f(\xi) \: \delta(x - \xi) \dd{\xi}
+ = f'(x)
+\end{aligned}$$
+
+*__Q.E.D.__*
+
+This property also generalizes nicely for the higher-order derivatives:
+
+$$\begin{aligned}
+ \boxed{
+ \int f(\xi) \: \dv[n]{\delta(x - \xi)}{x} \dd{\xi} = \dv[n]{f(x)}{x}
+ }
+\end{aligned}$$
diff --git a/content/know/concept/dirac-notation/index.pdc b/content/know/concept/dirac-notation/index.pdc
new file mode 100644
index 0000000..f624574
--- /dev/null
+++ b/content/know/concept/dirac-notation/index.pdc
@@ -0,0 +1,129 @@
+---
+title: "Dirac notation"
+firstLetter: "D"
+publishDate: 2021-02-22
+categories:
+- Quantum mechanics
+- Physics
+
+date: 2021-02-22T21:35:46+01:00
+draft: false
+markup: pandoc
+---
+
+# Dirac notation
+
+**Dirac notation** is a notation to do calculations in a Hilbert space
+without needing to worry about the space's representation. It is
+basically the *lingua franca* of quantum mechanics.
+
+In Dirac notation there are **kets** $\ket{V}$ from the Hilbert space
+$\mathbb{H}$ and **bras** $\bra{V}$ from a dual $\mathbb{H}'$ of the
+former. Crucially, the bras and kets are from different Hilbert spaces
+and therefore cannot be added, but every bra has a corresponding ket and
+vice versa.
+
+Bras and kets can be combined in two ways: the **inner product**
+$\braket{V}{W}$, which returns a scalar, and the **outer product**
+$\ket{V} \bra{W}$, which returns a mapping $\hat{L}$ from kets $\ket{V}$
+to other kets $\ket{V'}$, i.e. a linear operator. Recall that the
+Hilbert inner product must satisfy:
+
+$$\begin{aligned}
+ \braket{V}{W} = \braket{W}{V}^*
+\end{aligned}$$
+
+So far, nothing has been said about the actual representation of bras or
+kets. If we represent kets as $N$-dimensional columns vectors, the
+corresponding bras are given by the kets' adjoints, i.e. their transpose
+conjugates:
+
+$$\begin{aligned}
+ \ket{V} =
+ \begin{bmatrix}
+ v_1 \\ \vdots \\ v_N
+ \end{bmatrix}
+ \quad \implies \quad
+ \bra{V} =
+ \begin{bmatrix}
+ v_1^* & \cdots & v_N^*
+ \end{bmatrix}
+\end{aligned}$$
+
+The inner product $\braket{V}{W}$ is then just the familiar dot product $V \cdot W$:
+
+$$\begin{gathered}
+ \braket{V}{W}
+ =
+ \begin{bmatrix}
+ v_1^* & \cdots & v_N^*
+ \end{bmatrix}
+ \cdot
+ \begin{bmatrix}
+ w_1 \\ \vdots \\ w_N
+ \end{bmatrix}
+ = v_1^* w_1 + ... + v_N^* w_N
+\end{gathered}$$
+
+Meanwhile, the outer product $\ket{V} \bra{W}$ creates an $N \cross N$ matrix:
+
+$$\begin{gathered}
+ \ket{V} \bra{W}
+ =
+ \begin{bmatrix}
+ v_1 \\ \vdots \\ v_N
+ \end{bmatrix}
+ \cdot
+ \begin{bmatrix}
+ w_1^* & \cdots & w_N^*
+ \end{bmatrix}
+ =
+ \begin{bmatrix}
+ v_1 w_1^* & \cdots & v_1 w_N^* \\
+ \vdots & \ddots & \vdots \\
+ v_N w_1^* & \cdots & v_N w_N^*
+ \end{bmatrix}
+\end{gathered}$$
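These matrix rules are easy to try out numerically. The sketch below (with arbitrary example vectors) checks the conjugate symmetry of the inner product, and that the outer product acts as a linear operator on kets:

```python
import numpy as np

V = np.array([1 + 2j, 3j, -1])
W = np.array([2, 1 - 1j, 4j])

bra_V = V.conj()                   # <V|: conjugated row vector

inner = bra_V @ W                  # <V|W>, a scalar
outer = np.outer(V, W.conj())      # |V><W|, a 3x3 matrix

# <V|W> = <W|V>* ...
print(np.isclose(inner, (W.conj() @ V).conj()))     # True
# ... and (|V><W|) |W> = |V> <W|W>
print(np.allclose(outer @ W, V * (W.conj() @ W)))   # True
```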
+
+If the kets are instead represented by functions $f(x)$ of
+$x \in [a, b]$, then the bras represent *functionals* $F[u(x)]$ which
+take an unknown function $u(x)$ as an argument and turn it into a scalar
+using integration:
+
+$$\begin{aligned}
+ \ket{f} = f(x)
+ \quad \implies \quad
+ \bra{f}
+ = F[u(x)]
+ = \int_a^b f^*(x) \: u(x) \dd{x}
+\end{aligned}$$
+
+Consequently, the inner product is simply the following familiar integral:
+
+$$\begin{gathered}
+ \braket{f}{g}
+ = F[g(x)]
+ = \int_a^b f^*(x) \: g(x) \dd{x}
+\end{gathered}$$
+
+However, the outer product becomes something rather abstract:
+
+$$\begin{gathered}
+ \ket{f} \bra{g}
+ = f(x) \: G[u(x)]
+ = f(x) \int_a^b g^*(\xi) \: u(\xi) \dd{\xi}
+\end{gathered}$$
+
+This result makes more sense if we surround it by a bra and a ket:
+
+$$\begin{aligned}
+ \bra{u} \!\Big(\!\ket{f} \bra{g}\!\Big)\! \ket{w}
+ &= U\big[f(x) \: G[w(x)]\big]
+ = U\Big[ f(x) \int_a^b g^*(\xi) \: w(\xi) \dd{\xi} \Big]
+ \\
+ &= \int_a^b u^*(x) \: f(x) \: \Big(\int_a^b g^*(\xi) \: w(\xi) \dd{\xi} \Big) \dd{x}
+ \\
+ &= \Big( \int_a^b u^*(x) \: f(x) \dd{x} \Big) \Big( \int_a^b g^*(\xi) \: w(\xi) \dd{\xi} \Big)
+ \\
+ &= \braket{u}{f} \braket{g}{w}
+\end{aligned}$$
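This factorization can be confirmed numerically. The sketch below (with arbitrary real test functions on $[0, 1]$, chosen purely for illustration) evaluates both sides of the identity:

```python
import numpy as np
from scipy.integrate import quad

a, b = 0.0, 1.0
u = lambda x: np.sin(3 * x)    # arbitrary real test functions
f = lambda x: x**2
g = lambda x: np.exp(-x)
w = lambda x: np.cos(x)

# inner product <p|q> for real functions on [a, b]
ip = lambda p, q: quad(lambda x: p(x) * q(x), a, b)[0]

gw = ip(g, w)
# <u| (|f><g|) |w>: the inner integral <g|w> is a constant factor
lhs, _ = quad(lambda x: u(x) * f(x) * gw, a, b)

print(np.isclose(lhs, ip(u, f) * gw))   # True: <u|f><g|w>
```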
diff --git a/content/know/concept/fourier-transform/index.pdc b/content/know/concept/fourier-transform/index.pdc
new file mode 100644
index 0000000..6d8901a
--- /dev/null
+++ b/content/know/concept/fourier-transform/index.pdc
@@ -0,0 +1,117 @@
+---
+title: "Fourier transform"
+firstLetter: "F"
+publishDate: 2021-02-22
+categories:
+- Mathematics
+- Physics
+
+date: 2021-02-22T21:35:54+01:00
+draft: false
+markup: pandoc
+---
+
+# Fourier transform
+
+The **Fourier transform** (FT) is an integral transform which converts a
+function $f(x)$ into its frequency representation $\tilde{f}(k)$.
+Great volumes have already been written about this subject,
+so let us focus on the aspects that are useful to physicists.
+
+The **forward** FT is defined as follows, where $A$, $B$, and $s$ are unspecified constants
+(for now):
+
+$$\begin{aligned}
+ \boxed{
+ \tilde{f}(k)
+ = \hat{\mathcal{F}}\{f(x)\}
+ = A \int_{-\infty}^\infty f(x) \exp(i s k x) \dd{x}
+ }
+\end{aligned}$$
+
+The **inverse Fourier transform** (iFT) undoes the forward FT operation:
+
+$$\begin{aligned}
+ \boxed{
+ f(x)
+ = \hat{\mathcal{F}}^{-1}\{\tilde{f}(k)\}
+ = B \int_{-\infty}^\infty \tilde{f}(k) \exp(- i s k x) \dd{k}
+ }
+\end{aligned}$$
+
+Clearly, the inverse FT of the forward FT of $f(x)$ must equal $f(x)$
+again. Let us verify this, by rearranging the integrals to get the
+[Dirac delta function](/know/concept/dirac-delta-function/) $\delta(x)$:
+
+$$\begin{aligned}
+ \hat{\mathcal{F}}^{-1}\{\hat{\mathcal{F}}\{f(x)\}\}
+ &= A B \int_{-\infty}^\infty \exp(-i s k x) \int_{-\infty}^\infty f(x') \exp(i s k x') \dd{x'} \dd{k}
+ \\
+ &= 2 \pi A B \int_{-\infty}^\infty f(x') \Big(\frac{1}{2\pi} \int_{-\infty}^\infty \exp(i s k (x' - x)) \dd{k} \Big) \dd{x'}
+ \\
+ &= 2 \pi A B \int_{-\infty}^\infty f(x') \: \delta(s(x' - x)) \dd{x'}
+ = \frac{2 \pi A B}{|s|} f(x)
+\end{aligned}$$
+
+Therefore, the constants $A$, $B$, and $s$ are subject to the following
+constraint:
+
+$$\begin{aligned}
+ \boxed{\frac{2\pi A B}{|s|} = 1}
+\end{aligned}$$
+
+But that still gives a lot of freedom. The exact choices of $A$ and $B$
+are generally motivated by the [convolution theorem](/know/concept/convolution-theorem/)
+and [Parseval's theorem](/know/concept/parsevals-theorem/).
+
+The choice of $|s|$ depends on whether the frequency variable $k$
+represents the angular ($|s| = 1$) or the physical ($|s| = 2\pi$)
+frequency. The sign of $s$ is not so important, but is generally based
+on whether the analysis is for forward ($s > 0$) or backward-propagating
+($s < 0$) waves.
+
+
+## Derivatives
+
+The FT of a derivative has a very interesting property.
+Below, after integrating by parts, we remove the boundary term by
+assuming that $f(x)$ is localized, i.e. $f(x) \to 0$ for $x \to \pm \infty$:
+
+$$\begin{aligned}
+ \hat{\mathcal{F}}\{f'(x)\}
+ &= A \int_{-\infty}^\infty f'(x) \exp(i s k x) \dd{x}
+ \\
+ &= A \big[ f(x) \exp(i s k x) \big]_{-\infty}^\infty - i s k A \int_{-\infty}^\infty f(x) \exp(i s k x) \dd{x}
+ \\
+ &= (- i s k) \tilde{f}(k)
+\end{aligned}$$
+
+Therefore, as long as $f(x)$ is localized, the FT eliminates derivatives
+of the transformed variable, which makes it useful against PDEs:
+
+$$\begin{aligned}
+ \boxed{
+ \hat{\mathcal{F}}\{f'(x)\} = (- i s k) \tilde{f}(k)
+ }
+\end{aligned}$$
+
+This generalizes to higher-order derivatives, as long as these
+derivatives are also localized in the $x$-domain, which is practically
+guaranteed if $f(x)$ itself is localized:
+
+$$\begin{aligned}
+ \boxed{
+ \hat{\mathcal{F}} \Big\{ \dv[n]{f}{x} \Big\}
+ = (- i s k)^n \tilde{f}(k)
+ }
+\end{aligned}$$
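This property can be verified numerically for a localized test function. The sketch below assumes the particular convention $A = 1$, $s = 1$ (one valid choice among many) and approximates the transform integrals by Riemann sums:

```python
import numpy as np

x = np.linspace(-20, 20, 40001)
dx = x[1] - x[0]
k = 1.3                            # arbitrary frequency to test at

f  = np.exp(-x**2)                 # localized test function
fp = -2 * x * np.exp(-x**2)        # its exact derivative

# forward FT with A = 1, s = 1: F{f}(k) = int f(x) exp(i k x) dx
ft  = np.sum(f  * np.exp(1j * k * x)) * dx
ftp = np.sum(fp * np.exp(1j * k * x)) * dx

print(np.allclose(ftp, -1j * k * ft))   # True: F{f'} = (-i k) F{f}
```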
+
+Derivatives in the frequency domain have an analogous property:
+
+$$\begin{aligned}
+ \boxed{
+ \dv[n]{\tilde{f}}{k}
+ = A \int_{-\infty}^\infty (i s x)^n f(x) \exp(i s k x) \dd{x}
+ = \hat{\mathcal{F}}\{ (i s x)^n f(x) \}
+ }
+\end{aligned}$$
diff --git a/content/know/concept/gram-schmidt-method/index.pdc b/content/know/concept/gram-schmidt-method/index.pdc
new file mode 100644
index 0000000..88488dd
--- /dev/null
+++ b/content/know/concept/gram-schmidt-method/index.pdc
@@ -0,0 +1,47 @@
+---
+title: "Gram-Schmidt method"
+firstLetter: "G"
+publishDate: 2021-02-22
+categories:
+- Mathematics
+
+date: 2021-02-22T21:36:08+01:00
+draft: false
+markup: pandoc
+---
+
+# Gram-Schmidt method
+
+Given a set of linearly independent non-orthonormal vectors
+$\ket*{V_1}, \ket*{V_2}, ...$ from a [Hilbert space](/know/concept/hilbert-space/),
+the **Gram-Schmidt method**
+turns them into an orthonormal set $\ket*{n_1}, \ket*{n_2}, ...$ as follows:
+
+1. Take the first vector $\ket*{V_1}$ and normalize it to get $\ket*{n_1}$:
+
+ $$\begin{aligned}
+ \ket*{n_1} = \frac{\ket*{V_1}}{\sqrt{\braket*{V_1}{V_1}}}
+ \end{aligned}$$
+
+2. Begin loop. Take the next non-orthonormal vector $\ket*{V_j}$, and
+ subtract from it its projection onto every already-processed vector:
+
+ $$\begin{aligned}
+    \ket*{n_j'} = \ket*{V_j} - \ket*{n_1} \braket*{n_1}{V_j} - \ket*{n_2} \braket*{n_2}{V_j} - ... - \ket*{n_{j-1}} \braket*{n_{j-1}}{V_j}
+ \end{aligned}$$
+
+ This leaves only the part of $\ket*{V_j}$ which is orthogonal to
+    $\ket*{n_1}$, $\ket*{n_2}$, etc. This is why the input vectors must be
+    linearly independent; otherwise $\ket*{n_j'}$ may become zero at some
+ point.
+
+3. Normalize the resulting ortho*gonal* vector $\ket*{n_j'}$ to make it
+ ortho*normal*:
+
+ $$\begin{aligned}
+ \ket*{n_j} = \frac{\ket*{n_j'}}{\sqrt{\braket*{n_j'}{n_j'}}}
+ \end{aligned}$$
+
+4. Loop back to step 2, taking the next vector $\ket*{V_{j+1}}$.
+
+If you are unfamiliar with this notation, take a look at [Dirac notation](/know/concept/dirac-notation/).
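The steps above can be sketched in code as follows (a minimal NumPy implementation, not from the text; note that `np.vdot` conjugates its first argument, matching $\braket*{n_i}{V_j}$):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors (steps 1-4)."""
    basis = []
    for v in vectors:
        w = v.astype(complex)
        # step 2: subtract the projection onto every processed vector
        for n in basis:
            w = w - n * np.vdot(n, v)
        # step 3: normalize the orthogonal remainder
        basis.append(w / np.sqrt(np.vdot(w, w).real))
    return basis

vs = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([0.0, 1.0, 1.0])]
ns = gram_schmidt(vs)

# the Gram matrix of the result should be the identity
G = np.array([[np.vdot(a, b) for b in ns] for a in ns])
print(np.allclose(G, np.eye(3)))   # True
```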
diff --git a/content/know/concept/hilbert-space/index.pdc b/content/know/concept/hilbert-space/index.pdc
new file mode 100644
index 0000000..1faf08a
--- /dev/null
+++ b/content/know/concept/hilbert-space/index.pdc
@@ -0,0 +1,202 @@
+---
+title: "Hilbert space"
+firstLetter: "H"
+publishDate: 2021-02-22
+categories:
+- Mathematics
+- Quantum mechanics
+
+date: 2021-02-22T21:36:24+01:00
+draft: false
+markup: pandoc
+---
+
+# Hilbert space
+
+A **Hilbert space**, also known as an **inner product space**, is an
+abstract **vector space** with a notion of length and angle.
+
+
+## Vector space
+
+An abstract **vector space** $\mathbb{V}$ is a generalization of the
+traditional concept of vectors as "arrows". It consists of a set of
+objects called **vectors** which support the following (familiar)
+operations:
+
++ **Vector addition**: the sum of two vectors $V$ and $W$, denoted $V + W$.
++ **Scalar multiplication**: product of a vector $V$ with a scalar $a$, denoted $a V$.
+
+In addition, for a given $\mathbb{V}$ to qualify as a proper vector
+space, these operations must obey the following axioms:
+
++ **Addition is associative**: $U + (V + W) = (U + V) + W$
++ **Addition is commutative**: $U + V = V + U$
++ **Addition has an identity**: there exists a $\mathbf{0}$ such that $V + \mathbf{0} = V$
++ **Addition has an inverse**: for every $V$ there exists $-V$ so that $V + (-V) = \mathbf{0}$
++ **Multiplication is associative**: $a (b V) = (a b) V$
++ **Multiplication has an identity**: There exists a $1$ such that $1 V = V$
++ **Multiplication is distributive over scalars**: $(a + b)V = aV + bV$
++ **Multiplication is distributive over vectors**: $a (U + V) = a U + a V$
+
+A set of $N$ vectors $V_1, V_2, ..., V_N$ is **linearly independent** if
+the only way to satisfy the following relation is to set all the scalar coefficients $a_n = 0$:
+
+$$\begin{aligned}
+ \mathbf{0} = \sum_{n = 1}^N a_n V_n
+\end{aligned}$$
+
+In other words, these vectors cannot be expressed in terms of each
+other. Otherwise, they would be **linearly dependent**.
+
+A vector space $\mathbb{V}$ has **dimension** $N$ if only up to $N$ of
+its vectors can be linearly independent. All other vectors in
+$\mathbb{V}$ can then be written as a **linear combination** of these $N$ **basis vectors**.
+
+Let $\vu{e}_1, ..., \vu{e}_N$ be the basis vectors, then any
+vector $V$ in the same space can be **expanded** in the basis according to
+the unique weights $v_n$, known as the **components** of $V$
+in that basis:
+
+$$\begin{aligned}
+ V = \sum_{n = 1}^N v_n \vu{e}_n
+\end{aligned}$$
+
+Using these, the vector space operations can then be implemented as follows:
+
+$$\begin{gathered}
+    V = \sum_{n = 1}^N v_n \vu{e}_n
+    \quad
+    W = \sum_{n = 1}^N w_n \vu{e}_n
+ \\
+ \quad \implies \quad
+ V + W = \sum_{n = 1}^N (v_n + w_n) \vu{e}_n
+ \qquad
+ a V = \sum_{n = 1}^N a v_n \vu{e}_n
+\end{gathered}$$
+
+
+## Inner product
+
+A given vector space $\mathbb{V}$ can be promoted to a **Hilbert space**
+or **inner product space** if it supports an operation $\braket{U}{V}$
+called the **inner product**, which takes two vectors and returns a
+scalar, and has the following properties:
+
++ **Conjugate symmetry**: $\braket{U}{V} = (\braket{V}{U})^*$, where ${}^*$ is the complex conjugate.
++ **Positive definiteness**: $\braket{V}{V} \ge 0$, with $\braket{V}{V} = 0$ if and only if $V = \mathbf{0}$.
++ **Linearity in the second operand**: $\braket{U}{(a V + b W)} = a \braket{U}{V} + b \braket{U}{W}$.
+
+The inner product describes the lengths and angles of vectors, and in
+Euclidean space it is implemented by the dot product.
+
+The **magnitude** or **norm** $|V|$ of a vector $V$ is given by
+$|V| = \sqrt{\braket{V}{V}}$ and represents the real, non-negative length of $V$.
+A **unit vector** has a norm of 1.
+
+Two vectors $U$ and $V$ are **orthogonal** if their inner product
+$\braket{U}{V} = 0$. If in addition to being orthogonal, $|U| = 1$ and
+$|V| = 1$, then $U$ and $V$ are known as **orthonormal** vectors.
+
+Orthonormality is desirable for basis vectors, so if they are
+not already like that, it is common to manually turn them into a new
+orthonormal basis using e.g. the [Gram-Schmidt method](/know/concept/gram-schmidt-method).
+
+As for the implementation of the inner product, it is given by:
+
+$$\begin{gathered}
+ V = \sum_{n = 1}^N v_n \vu{e}_n
+ \quad
+ W = \sum_{n = 1}^N w_n \vu{e}_n
+ \\
+ \quad \implies \quad
+    \braket{V}{W} = \sum_{n = 1}^N \sum_{m = 1}^N v_n^* w_m \braket{\vu{e}_n}{\vu{e}_m}
+\end{gathered}$$
+
+If the basis vectors $\vu{e}_1, ..., \vu{e}_N$ are already
+orthonormal, this reduces to:
+
+$$\begin{aligned}
+ \braket{V}{W} = \sum_{n = 1}^N v_n^* w_n
+\end{aligned}$$
+
+As it turns out, the components $v_n$ are given by the inner product
+with $\vu{e}_n$, where $\delta_{nm}$ is the Kronecker delta:
+
+$$\begin{aligned}
+ \braket{\vu{e}_n}{V} = \sum_{m = 1}^N \delta_{nm} v_m = v_n
+\end{aligned}$$
+
+
+## Infinite dimensions
+
+As the dimensionality $N$ tends to infinity, things may or may not
+change significantly, depending on whether $N$ is **countably** or
+**uncountably** infinite.
+
+In the former case, not much changes: the infinitely many **discrete**
+basis vectors $\vu{e}_n$ can all still be made orthonormal as usual,
+and as before:
+
+$$\begin{aligned}
+ V = \sum_{n = 1}^\infty v_n \vu{e}_n
+\end{aligned}$$
+
+A good example of such a countably infinite basis is the set of solution
+eigenfunctions of a [Sturm-Liouville problem](/know/concept/sturm-liouville-theory/).
+
+However, if the dimensionality is uncountably infinite, the basis
+vectors are **continuous** and cannot be labeled by $n$. For example, all
+complex functions $f(x)$ defined for $x \in [a, b]$ which
+satisfy $f(a) = f(b) = 0$ form such a vector space.
+In this case $\ket{f}$ is expanded as follows, where each position $x$
+labels a basis vector $\ket{x}$, with components $\braket{x}{f} = f(x)$:
+
+$$\begin{aligned}
+    \ket{f} = \int_a^b \ket{x} \braket{x}{f} \dd{x}
+\end{aligned}$$
+
+Similarly, the inner product $\braket{f}{g}$ must also be redefined as
+follows:
+
+$$\begin{aligned}
+ \braket{f}{g} = \int_a^b f^*(x) \: g(x) \dd{x}
+\end{aligned}$$
+
+The concept of orthonormality must also be weakened. A function $f(x)$
+with finite norm can be normalized as usual, but the basis vectors $x$ themselves
+cannot, since each represents an infinitesimal section of the real line.
+
+The rationale in this case is that the action of the identity operator $\hat{I}$ must
+be preserved, which is given here in [Dirac notation](/know/concept/dirac-notation/):
+
+$$\begin{aligned}
+ \hat{I} = \int_a^b \ket{\xi} \bra{\xi} \dd{\xi}
+\end{aligned}$$
+
+Applying the identity operator to $f(x)$ should just give $f(x)$ again:
+
+$$\begin{aligned}
+ f(x) = \braket{x}{f} = \matrixel{x}{\hat{I}}{f}
+ = \int_a^b \braket{x}{\xi} \braket{\xi}{f} \dd{\xi}
+ = \int_a^b \braket{x}{\xi} f(\xi) \dd{\xi}
+\end{aligned}$$
+
+Since we want the latter integral to reduce to $f(x)$, it is plain to see that
+$\braket{x}{\xi}$ can only be a [Dirac delta function](/know/concept/dirac-delta-function/),
+i.e. $\braket{x}{\xi} = \delta(x - \xi)$:
+
+$$\begin{aligned}
+ \int_a^b \braket{x}{\xi} f(\xi) \dd{\xi}
+ = \int_a^b \delta(x - \xi) f(\xi) \dd{\xi}
+ = f(x)
+\end{aligned}$$
+
+Consequently, $\braket{x}{\xi} = 0$ if $x \neq \xi$ as expected for an
+orthogonal set of basis vectors, but if $x = \xi$ the inner product
+$\braket{x}{\xi}$ is infinite, unlike earlier.
+
+Technically, because the basis vectors $x$ cannot be normalized, they
+are not members of a Hilbert space, but rather of a superset called a
+**rigged Hilbert space**. Such vectors have no finite inner product with
+themselves, but do have one with all vectors from the actual Hilbert
+space.
diff --git a/content/know/concept/legendre-transform/index.pdc b/content/know/concept/legendre-transform/index.pdc
new file mode 100644
index 0000000..8a0d3e3
--- /dev/null
+++ b/content/know/concept/legendre-transform/index.pdc
@@ -0,0 +1,89 @@
+---
+title: "Legendre transform"
+firstLetter: "L"
+publishDate: 2021-02-22
+categories:
+- Mathematics
+- Physics
+
+date: 2021-02-22T21:36:35+01:00
+draft: false
+markup: pandoc
+---
+
+# Legendre transform
+
+The **Legendre transform** of a function $f(x)$ is a new function $L(f')$,
+which depends only on the derivative $f'(x)$ of $f(x)$, and from which
+the original function $f(x)$ can be reconstructed. The point is,
+analogously to other transforms (e.g. [Fourier](/know/concept/fourier-transform/)),
+that $L(f')$ contains the same information as $f(x)$, just in a different form.
+
+Let us choose an arbitrary point $x_0 \in [a, b]$ in the domain of
+$f(x)$. Consider a line $y(x)$ tangent to $f(x)$ at $x = x_0$, which has
+a slope $f'(x_0)$ and intersects the $y$-axis at $-C$:
+
+$$\begin{aligned}
+ y(x) = f'(x_0) (x - x_0) + f(x_0) = f'(x_0) x - C
+\end{aligned}$$
+
+The Legendre transform $L(f')$ is defined such that $L(f'(x_0)) = C$ (or
+sometimes $-C$ instead) for all $x_0 \in [a, b]$, where $C$ is the
+constant corresponding to the tangent line at $x = x_0$. This yields:
+
+$$\begin{aligned}
+ L(f'(x)) = f'(x) \: x - f(x)
+\end{aligned}$$
+
+We want this function to depend only on the derivative $f'$, but
+currently $x$ still appears here as a variable. We fix that problem in
+the easiest possible way: by assuming that $f'(x)$ is invertible for all
+$x \in [a, b]$. If $x(f')$ is the inverse of $f'(x)$, then $L(f')$ is
+given by:
+
+$$\begin{aligned}
+ \boxed{
+ L(f') = f' \: x(f') - f(x(f'))
+ }
+\end{aligned}$$
+
+The only requirement for the existence of the Legendre transform is thus
+the invertibility of $f'(x)$ in the target interval $[a,b]$, which can
+only be true if $f(x)$ is either convex or concave, i.e. its derivative
+$f'(x)$ is monotonic.
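As a small worked example (not from the text): for $f(x) = x^2$, the derivative $f'(x) = 2x$ is invertible with $x(f') = f'/2$, which gives $L(f') = f' \cdot f'/2 - (f'/2)^2 = f'^2/4$. The sketch below checks this on a grid:

```python
import numpy as np

fp = np.linspace(-4.0, 4.0, 9)     # values of the new variable f'

x_of_fp = fp / 2                   # invert f'(x) = 2x
L = fp * x_of_fp - x_of_fp**2      # L(f') = f' x(f') - f(x(f'))

print(np.allclose(L, fp**2 / 4))   # True: L(f') = f'^2 / 4
```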
+