From b5608ab92a4f8a5140571acabf54e3c6bdebd0e4 Mon Sep 17 00:00:00 2001 From: Prefetch Date: Wed, 24 Feb 2021 09:47:22 +0100 Subject: Initial commit --- content/know/concept/_index.md | 8 + content/know/concept/blochs-theorem/index.pdc | 115 +++++++ content/know/concept/convolution-theorem/index.pdc | 100 ++++++ .../know/concept/dirac-delta-function/index.pdc | 109 +++++++ content/know/concept/dirac-notation/index.pdc | 129 ++++++++ content/know/concept/fourier-transform/index.pdc | 117 +++++++ content/know/concept/gram-schmidt-method/index.pdc | 47 +++ content/know/concept/hilbert-space/index.pdc | 202 ++++++++++++ content/know/concept/legendre-transform/index.pdc | 89 ++++++ content/know/concept/parsevals-theorem/index.pdc | 76 +++++ .../partial-fraction-decomposition/index.pdc | 60 ++++ .../concept/pauli-exclusion-principle/index.pdc | 125 ++++++++ content/know/concept/probability-current/index.pdc | 98 ++++++ content/know/concept/slater-determinant/index.pdc | 54 ++++ .../know/concept/sturm-liouville-theory/index.pdc | 346 +++++++++++++++++++++ .../time-independent-perturbation-theory/index.pdc | 329 ++++++++++++++++++++ .../index.pdc | 198 ++++++++++++ 17 files changed, 2202 insertions(+) create mode 100644 content/know/concept/_index.md create mode 100644 content/know/concept/blochs-theorem/index.pdc create mode 100644 content/know/concept/convolution-theorem/index.pdc create mode 100644 content/know/concept/dirac-delta-function/index.pdc create mode 100644 content/know/concept/dirac-notation/index.pdc create mode 100644 content/know/concept/fourier-transform/index.pdc create mode 100644 content/know/concept/gram-schmidt-method/index.pdc create mode 100644 content/know/concept/hilbert-space/index.pdc create mode 100644 content/know/concept/legendre-transform/index.pdc create mode 100644 content/know/concept/parsevals-theorem/index.pdc create mode 100644 content/know/concept/partial-fraction-decomposition/index.pdc create mode 100644 
content/know/concept/pauli-exclusion-principle/index.pdc create mode 100644 content/know/concept/probability-current/index.pdc create mode 100644 content/know/concept/slater-determinant/index.pdc create mode 100644 content/know/concept/sturm-liouville-theory/index.pdc create mode 100644 content/know/concept/time-independent-perturbation-theory/index.pdc create mode 100644 content/know/concept/wentzel-kramers-brillouin-approximation/index.pdc diff --git a/content/know/concept/_index.md b/content/know/concept/_index.md new file mode 100644 index 0000000..956724a --- /dev/null +++ b/content/know/concept/_index.md @@ -0,0 +1,8 @@ +--- +title: "List of concepts" +date: 2021-02-22T20:38:58+01:00 +draft: false +layout: "know-list" +--- + +This is an alphabetical list of the concepts in this knowledge base. diff --git a/content/know/concept/blochs-theorem/index.pdc b/content/know/concept/blochs-theorem/index.pdc new file mode 100644 index 0000000..1828d8a --- /dev/null +++ b/content/know/concept/blochs-theorem/index.pdc @@ -0,0 +1,115 @@ +--- +title: "Bloch's theorem" +firstLetter: "B" +publishDate: 2021-02-22 +categories: +- Quantum mechanics + +date: 2021-02-22T20:02:14+01:00 +draft: false +markup: pandoc +--- + +# Bloch's theorem +In quantum mechanics, **Bloch's theorem** states that, +given a potential $V(\vec{r})$ which is periodic on a lattice, +i.e. $V(\vec{r}) = V(\vec{r} + \vec{a})$ +for a primitive lattice vector $\vec{a}$, +the solutions $\psi(\vec{r})$ +to the time-independent Schrödinger equation +take the following form, +where the function $u(\vec{r})$ is periodic on the same lattice, +i.e. $u(\vec{r}) = u(\vec{r} + \vec{a})$: + +$$ +\begin{aligned} + \boxed{ + \psi(\vec{r}) = u(\vec{r}) e^{i \vec{k} \cdot \vec{r}} + } +\end{aligned} +$$ + +In other words, in a periodic potential, +the solutions are simply plane waves with a periodic modulation, +known as **Bloch functions** or **Bloch states**.
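Before proving this, the theorem can be checked numerically. The sketch below is an addition to the text: it builds a tight-binding ring whose on-site potential has period $p$, and verifies that every non-degenerate eigenstate only picks up a phase under translation by one period. All parameters are illustrative choices; the complex hopping phase merely lifts the $k \leftrightarrow -k$ degeneracy so that the eigensolver returns genuine Bloch states.

```python
import numpy as np

N, p = 12, 3
V = np.tile([0.0, 0.5, -0.3], N // p)      # lattice-periodic potential
t, phase = 1.0, 0.37

H = np.diag(V).astype(complex)
for n in range(N):
    H[n, (n + 1) % N] += -t * np.exp(1j * phase)
    H[(n + 1) % N, n] += -t * np.exp(-1j * phase)

# Translation by one lattice period: (T psi)(n) = psi(n + p)
T = np.zeros((N, N))
for n in range(N):
    T[n, (n + p) % N] = 1.0

assert np.allclose(H @ T, T @ H)           # [H, T] = 0 for a periodic potential

E, psi = np.linalg.eigh(H)
for i in range(N):
    gaps = np.abs(E - E[i]); gaps[i] = np.inf
    if gaps.min() < 1e-6:                  # skip (near-)degenerate levels,
        continue                           # where eigh may mix Bloch states
    v = psi[:, i]
    tau = v.conj() @ (T @ v)               # eigenvalue of T on this state
    assert np.isclose(abs(tau), 1.0)       # unitary: tau = e^{i theta}
    assert np.allclose(T @ v, tau * v)     # psi(r + a) = e^{i k . a} psi(r)
```

The assertions mirror the proof below: the Hamiltonian commutes with the translation operator, and each shared eigenstate satisfies $\psi(\vec{r} + \vec{a}) = e^{i \vec{k} \cdot \vec{a}} \psi(\vec{r})$.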
+ +This is surprisingly easy to prove: +if the Hamiltonian $\hat{H}$ is lattice-periodic, +then both $\psi(\vec{r})$ and $\psi(\vec{r} + \vec{a})$ +are eigenstates with the same energy: + +$$ +\begin{aligned} + \hat{H} \psi(\vec{r}) = E \psi(\vec{r}) + \qquad + \hat{H} \psi(\vec{r} + \vec{a}) = E \psi(\vec{r} + \vec{a}) +\end{aligned} +$$ + +Now define the unitary translation operator $\hat{T}(\vec{a})$ such that +$\psi(\vec{r} + \vec{a}) = \hat{T}(\vec{a}) \psi(\vec{r})$. +From the previous equation, we then know that: + +$$ +\begin{aligned} + \hat{H} \hat{T}(\vec{a}) \psi(\vec{r}) + = E \hat{T}(\vec{a}) \psi(\vec{r}) + = \hat{T}(\vec{a}) \big(E \psi(\vec{r})\big) + = \hat{T}(\vec{a}) \hat{H} \psi(\vec{r}) +\end{aligned} +$$ + +In other words, if $\hat{H}$ is lattice-periodic, +then it will commute with $\hat{T}(\vec{a})$, +i.e. $[\hat{H}, \hat{T}(\vec{a})] = 0$. +Consequently, $\hat{H}$ and $\hat{T}(\vec{a})$ must share eigenstates $\psi(\vec{r})$: + +$$ +\begin{aligned} + \hat{H} \:\psi(\vec{r}) = E \:\psi(\vec{r}) + \qquad + \hat{T}(\vec{a}) \:\psi(\vec{r}) = \tau \:\psi(\vec{r}) +\end{aligned} +$$ + +Since $\hat{T}$ is unitary, +its eigenvalues $\tau$ must have the form $e^{i \theta}$, with $\theta$ real.
+Therefore a translation by $\vec{a}$ causes a phase shift, +for some vector $\vec{k}$: + +$$ +\begin{aligned} + \psi(\vec{r} + \vec{a}) + = \hat{T}(\vec{a}) \:\psi(\vec{r}) + = e^{i \theta} \:\psi(\vec{r}) + = e^{i \vec{k} \cdot \vec{a}} \:\psi(\vec{r}) +\end{aligned} +$$ + +Let us now define the following function, +keeping our arbitrary choice of $\vec{k}$: + +$$ +\begin{aligned} + u(\vec{r}) + = e^{- i \vec{k} \cdot \vec{r}} \:\psi(\vec{r}) +\end{aligned} +$$ + +As it turns out, this function is guaranteed to be lattice-periodic for any $\vec{k}$: + +$$ +\begin{aligned} + u(\vec{r} + \vec{a}) + &= e^{- i \vec{k} \cdot (\vec{r} + \vec{a})} \:\psi(\vec{r} + \vec{a}) + \\ + &= e^{- i \vec{k} \cdot \vec{r}} e^{- i \vec{k} \cdot \vec{a}} e^{i \vec{k} \cdot \vec{a}} \:\psi(\vec{r}) + \\ + &= e^{- i \vec{k} \cdot \vec{r}} \:\psi(\vec{r}) + \\ + &= u(\vec{r}) +\end{aligned} +$$ + +Then Bloch's theorem follows from +isolating the definition of $u(\vec{r})$ for $\psi(\vec{r})$. diff --git a/content/know/concept/convolution-theorem/index.pdc b/content/know/concept/convolution-theorem/index.pdc new file mode 100644 index 0000000..fc96f30 --- /dev/null +++ b/content/know/concept/convolution-theorem/index.pdc @@ -0,0 +1,100 @@ +--- +title: "Convolution theorem" +firstLetter: "C" +publishDate: 2021-02-22 +categories: +- Mathematics + +date: 2021-02-22T21:35:23+01:00 +draft: false +markup: pandoc +--- + +# Convolution theorem + +The **convolution theorem** states that a convolution in the direct domain +is equal to a product in the frequency domain. This is especially useful +for computation, replacing an $\mathcal{O}(n^2)$ convolution with an +$\mathcal{O}(n \log(n))$ transform and product. 
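This speedup is easy to verify. The sketch below is an addition to the text: it compares a direct $\mathcal{O}(n^2)$ circular convolution of two random signals with the $\mathcal{O}(n \log(n))$ route through NumPy's FFT, whose conventions make the discrete convolution theorem hold exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
f = rng.standard_normal(n)
g = rng.standard_normal(n)

# direct O(n^2) circular convolution: (f * g)(k) = sum_m f(m) g(k - m)
direct = np.array([sum(f[m] * g[(k - m) % n] for m in range(n))
                   for k in range(n)])

# O(n log n) route: transform, multiply in the frequency domain, transform back
via_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real

assert np.allclose(direct, via_fft)
```

For non-periodic signals the same trick works after zero-padding both inputs to length $2n - 1$, which is how FFT-based convolution is usually implemented in practice.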
+ +## Fourier transform + +The convolution theorem is usually expressed as follows, where +$\hat{\mathcal{F}}$ is the [Fourier transform](/know/concept/fourier-transform/), +and $A$ and $B$ are constants from its definition: + +$$\begin{aligned} + \boxed{ + \begin{aligned} + A \cdot (f * g)(x) &= \hat{\mathcal{F}}^{-1}\{\tilde{f}(k) \: \tilde{g}(k)\} \\ + B \cdot (\tilde{f} * \tilde{g})(k) &= \hat{\mathcal{F}}\{f(x) \: g(x)\} + \end{aligned} + } +\end{aligned}$$ + +To prove this, we expand the right-hand side of the theorem and +rearrange the integrals: + +$$\begin{aligned} + \hat{\mathcal{F}}^{-1}\{\tilde{f}(k) \: \tilde{g}(k)\} + &= B \int_{-\infty}^\infty \tilde{f}(k) \Big( A \int_{-\infty}^\infty g(x') \exp(i s k x') \dd{x'} \Big) \exp(-i s k x) \dd{k} + \\ + &= A \int_{-\infty}^\infty g(x') \Big( B \int_{-\infty}^\infty \tilde{f}(k) \exp(- i s k (x - x')) \dd{k} \Big) \dd{x'} + \\ + &= A \int_{-\infty}^\infty g(x') f(x - x') \dd{x'} + = A \cdot (f * g)(x) +\end{aligned}$$ + +Then we do the same thing again, this time starting from a product in +the $x$-domain: + +$$\begin{aligned} + \hat{\mathcal{F}}\{f(x) \: g(x)\} + &= A \int_{-\infty}^\infty f(x) \Big( B \int_{-\infty}^\infty \tilde{g}(k') \exp(- i s x k') \dd{k'} \Big) \exp(i s k x) \dd{x} + \\ + &= B \int_{-\infty}^\infty \tilde{g}(k') \Big( A \int_{-\infty}^\infty f(x) \exp(i s x (k - k')) \dd{x} \Big) \dd{k'} + \\ + &= B \int_{-\infty}^\infty \tilde{g}(k') \tilde{f}(k - k') \dd{k'} + = B \cdot (\tilde{f} * \tilde{g})(k) +\end{aligned}$$ + + +## Laplace transform + +For functions $f(t)$ and $g(t)$ which are only defined for $t \ge 0$, +the convolution theorem can also be stated using the Laplace transform: + +$$\begin{aligned} + \boxed{(f * g)(t) = \hat{\mathcal{L}}^{-1}\{\tilde{f}(s) \: \tilde{g}(s)\}} +\end{aligned}$$ + +Because the inverse Laplace transform $\hat{\mathcal{L}}^{-1}$ is quite +unpleasant, the theorem is often stated using the forward transform +instead: + +$$\begin{aligned} + 
\boxed{\hat{\mathcal{L}}\{(f * g)(t)\} = \tilde{f}(s) \: \tilde{g}(s)} +\end{aligned}$$ + +We prove this by expanding the left-hand side. Note that the lower +integration limit is 0 instead of $-\infty$, because we set both $f(t)$ +and $g(t)$ to zero for $t < 0$: + +$$\begin{aligned} + \hat{\mathcal{L}}\{(f * g)(t)\} + &= \int_0^\infty \Big( \int_0^\infty g(t') f(t - t') \dd{t'} \Big) \exp(- s t) \dd{t} + \\ + &= \int_0^\infty \Big( \int_0^\infty f(t - t') \exp(- s t) \dd{t} \Big) g(t') \dd{t'} +\end{aligned}$$ + +Then we define a new integration variable $\tau = t - t'$, yielding: + +$$\begin{aligned} + \hat{\mathcal{L}}\{(f * g)(t)\} + &= \int_0^\infty \Big( \int_0^\infty f(\tau) \exp(- s (\tau + t')) \dd{\tau} \Big) g(t') \dd{t'} + \\ + &= \int_0^\infty \Big( \int_0^\infty f(\tau) \exp(- s \tau) \dd{\tau} \Big) g(t') \exp(- s t') \dd{t'} + \\ + &= \int_0^\infty \tilde{f}(s) g(t') \exp(- s t') \dd{t'} + = \tilde{f}(s) \: \tilde{g}(s) +\end{aligned}$$ diff --git a/content/know/concept/dirac-delta-function/index.pdc b/content/know/concept/dirac-delta-function/index.pdc new file mode 100644 index 0000000..3982afc --- /dev/null +++ b/content/know/concept/dirac-delta-function/index.pdc @@ -0,0 +1,109 @@ +--- +title: "Dirac delta function" +firstLetter: "D" +publishDate: 2021-02-22 +categories: +- Mathematics +- Physics + +date: 2021-02-22T21:35:38+01:00 +draft: false +markup: pandoc +--- + +# Dirac delta function + +The **Dirac delta function** $\delta(x)$, often just called the **delta function**, +is an infinitely narrow discontinuous "spike" at $x = 0$ whose area is +defined to be 1: + +$$\begin{aligned} + \boxed{ + \delta(x) = + \begin{cases} + +\infty & \mathrm{if}\: x = 0 \\ + 0 & \mathrm{if}\: x \neq 0 + \end{cases} + \quad \mathrm{and} \quad + \int_{-\varepsilon}^\varepsilon \delta(x) \dd{x} = 1 + } +\end{aligned}$$ + +It is sometimes also called the **sampling function**, due to its most +important property: the so-called **sampling property**: + 
+$$\begin{aligned} + \boxed{ + \int f(x) \: \delta(x - x_0) \: dx = \int f(x) \: \delta(x_0 - x) \: dx = f(x_0) + } +\end{aligned}$$ + +$\delta(x)$ is thus an effective weapon against integrals. This may not seem very +useful due to its "unnatural" definition, but in fact it appears as the +limit of several reasonable functions: + +$$\begin{aligned} + \delta(x) + = \lim_{n \to +\infty} \!\Big\{ \frac{n}{\sqrt{\pi}} \exp(- n^2 x^2) \Big\} + = \lim_{n \to +\infty} \!\Big\{ \frac{n}{\pi} \frac{1}{1 + n^2 x^2} \Big\} + = \lim_{n \to +\infty} \!\Big\{ \frac{\sin(n x)}{\pi x} \Big\} +\end{aligned}$$ + +The last one is especially important, since it is equivalent to the +following integral, which appears very often in the context of +[Fourier transforms](/know/concept/fourier-transform/): + +$$\begin{aligned} + \boxed{ + \delta(x) + %= \lim_{n \to +\infty} \!\Big\{\frac{\sin(n x)}{\pi x}\Big\} + = \frac{1}{2\pi} \int_{-\infty}^\infty \exp(i k x) \dd{k} + \:\:\propto\:\: \hat{\mathcal{F}}\{1\} + } +\end{aligned}$$ + +When the argument of $\delta(x)$ is scaled, the delta function is itself scaled: + +$$\begin{aligned} + \boxed{ + \delta(s x) = \frac{1}{|s|} \delta(x) + } +\end{aligned}$$ + +*__Proof.__ Because it is symmetric, $\delta(s x) = \delta(|s| x)$. 
Then by +substituting $\sigma = |s| x$:* + +$$\begin{aligned} + \int \delta(|s| x) \dd{x} + &= \frac{1}{|s|} \int \delta(\sigma) \dd{\sigma} = \frac{1}{|s|} +\end{aligned}$$ + +*__Q.E.D.__* + +An even more impressive property is the behaviour of the derivative of +$\delta(x)$: + +$$\begin{aligned} + \boxed{ + \int f(\xi) \: \delta'(x - \xi) \dd{\xi} = f'(x) + } +\end{aligned}$$ + +*__Proof.__ Note which variable is used for the +differentiation, and that $\delta'(x - \xi) = - \delta'(\xi - x)$:* + +$$\begin{aligned} + \int f(\xi) \: \dv{\delta(x - \xi)}{x} \dd{\xi} + &= \dv{x} \int f(\xi) \: \delta(x - \xi) \dd{\xi} + = f'(x) +\end{aligned}$$ + +*__Q.E.D.__* + +This property also generalizes nicely to higher-order derivatives: + +$$\begin{aligned} + \boxed{ + \int f(\xi) \: \dv[n]{\delta(x - \xi)}{x} \dd{\xi} = \dv[n]{f(x)}{x} + } +\end{aligned}$$ diff --git a/content/know/concept/dirac-notation/index.pdc b/content/know/concept/dirac-notation/index.pdc new file mode 100644 index 0000000..f624574 --- /dev/null +++ b/content/know/concept/dirac-notation/index.pdc @@ -0,0 +1,129 @@ +--- +title: "Dirac notation" +firstLetter: "D" +publishDate: 2021-02-22 +categories: +- Quantum mechanics +- Physics + +date: 2021-02-22T21:35:46+01:00 +draft: false +markup: pandoc +--- + +# Dirac notation + +**Dirac notation** is a notation for doing calculations in a Hilbert space +without needing to worry about the space's representation. It is +basically the *lingua franca* of quantum mechanics. + +In Dirac notation there are **kets** $\ket{V}$ from the Hilbert space +$\mathbb{H}$ and **bras** $\bra{V}$ from a dual $\mathbb{H}'$ of the +former. Crucially, the bras and kets are from different Hilbert spaces +and therefore cannot be added, but every bra has a corresponding ket and +vice versa.
+ +Bras and kets can be combined in two ways: the **inner product** +$\braket{V}{W}$, which returns a scalar, and the **outer product** +$\ket{V} \bra{W}$, which returns a mapping $\hat{L}$ from kets $\ket{V}$ +to other kets $\ket{V'}$, i.e. a linear operator. Recall that the +Hilbert inner product must satisfy: + +$$\begin{aligned} + \braket{V}{W} = \braket{W}{V}^* +\end{aligned}$$ + +So far, nothing has been said about the actual representation of bras or +kets. If we represent kets as $N$-dimensional column vectors, the +corresponding bras are given by the kets' adjoints, i.e. their transpose +conjugates: + +$$\begin{aligned} + \ket{V} = + \begin{bmatrix} + v_1 \\ \vdots \\ v_N + \end{bmatrix} + \quad \implies \quad + \bra{V} = + \begin{bmatrix} + v_1^* & \cdots & v_N^* + \end{bmatrix} +\end{aligned}$$ + +The inner product $\braket{V}{W}$ is then just the familiar dot product $V \cdot W$: + +$$\begin{gathered} + \braket{V}{W} + = + \begin{bmatrix} + v_1^* & \cdots & v_N^* + \end{bmatrix} + \cdot + \begin{bmatrix} + w_1 \\ \vdots \\ w_N + \end{bmatrix} + = v_1^* w_1 + ...
+ v_N^* w_N +\end{gathered}$$ + +Meanwhile, the outer product $\ket{V} \bra{W}$ creates an $N \cross N$ matrix: + +$$\begin{gathered} + \ket{V} \bra{W} + = + \begin{bmatrix} + v_1 \\ \vdots \\ v_N + \end{bmatrix} + \cdot + \begin{bmatrix} + w_1^* & \cdots & w_N^* + \end{bmatrix} + = + \begin{bmatrix} + v_1 w_1^* & \cdots & v_1 w_N^* \\ + \vdots & \ddots & \vdots \\ + v_N w_1^* & \cdots & v_N w_N^* + \end{bmatrix} +\end{gathered}$$ + +If the kets are instead represented by functions $f(x)$ of +$x \in [a, b]$, then the bras represent *functionals* $F[u(x)]$ which +take an unknown function $u(x)$ as an argument and turn it into a scalar +using integration: + +$$\begin{aligned} + \ket{f} = f(x) + \quad \implies \quad + \bra{f} + = F[u(x)] + = \int_a^b f^*(x) \: u(x) \dd{x} +\end{aligned}$$ + +Consequently, the inner product is simply the following familiar integral: + +$$\begin{gathered} + \braket{f}{g} + = F[g(x)] + = \int_a^b f^*(x) \: g(x) \dd{x} +\end{gathered}$$ + +However, the outer product becomes something rather abstract: + +$$\begin{gathered} + \ket{f} \bra{g} + = f(x) \: G[u(x)] + = f(x) \int_a^b g^*(\xi) \: u(\xi) \dd{\xi} +\end{gathered}$$ + +This result makes more sense if we surround it by a bra and a ket: + +$$\begin{aligned} + \bra{u} \!\Big(\!\ket{f} \bra{g}\!\Big)\! 
\ket{w} + &= U\big[f(x) \: G[w(x)]\big] + = U\Big[ f(x) \int_a^b g^*(\xi) \: w(\xi) \dd{\xi} \Big] + \\ + &= \int_a^b u^*(x) \: f(x) \: \Big(\int_a^b g^*(\xi) \: w(\xi) \dd{\xi} \Big) \dd{x} + \\ + &= \Big( \int_a^b u^*(x) \: f(x) \dd{x} \Big) \Big( \int_a^b g^*(\xi) \: w(\xi) \dd{\xi} \Big) + \\ + &= \braket{u}{f} \braket{g}{w} +\end{aligned}$$ diff --git a/content/know/concept/fourier-transform/index.pdc b/content/know/concept/fourier-transform/index.pdc new file mode 100644 index 0000000..6d8901a --- /dev/null +++ b/content/know/concept/fourier-transform/index.pdc @@ -0,0 +1,117 @@ +--- +title: "Fourier transform" +firstLetter: "F" +publishDate: 2021-02-22 +categories: +- Mathematics +- Physics + +date: 2021-02-22T21:35:54+01:00 +draft: false +markup: pandoc +--- + +# Fourier transform + +The **Fourier transform** (FT) is an integral transform which converts a +function $f(x)$ into its frequency representation $\tilde{f}(k)$. +Great volumes have already been written about this subject, +so let us focus on the aspects that are useful to physicists. + +The **forward** FT is defined as follows, where $A$, $B$, and $s$ are unspecified constants +(for now): + +$$\begin{aligned} + \boxed{ + \tilde{f}(k) + = \hat{\mathcal{F}}\{f(x)\} + = A \int_{-\infty}^\infty f(x) \exp(i s k x) \dd{x} + } +\end{aligned}$$ + +The **inverse Fourier transform** (iFT) undoes the forward FT operation: + +$$\begin{aligned} + \boxed{ + f(x) + = \hat{\mathcal{F}}^{-1}\{\tilde{f}(k)\} + = B \int_{-\infty}^\infty \tilde{f}(k) \exp(- i s k x) \dd{k} + } +\end{aligned}$$ + +Clearly, the inverse FT of the forward FT of $f(x)$ must equal $f(x)$ +again. 
Let us verify this, by rearranging the integrals to get the +[Dirac delta function](/know/concept/dirac-delta-function/) $\delta(x)$: + +$$\begin{aligned} + \hat{\mathcal{F}}^{-1}\{\hat{\mathcal{F}}\{f(x)\}\} + &= A B \int_{-\infty}^\infty \exp(-i s k x) \int_{-\infty}^\infty f(x') \exp(i s k x') \dd{x'} \dd{k} + \\ + &= 2 \pi A B \int_{-\infty}^\infty f(x') \Big(\frac{1}{2\pi} \int_{-\infty}^\infty \exp(i s k (x' - x)) \dd{k} \Big) \dd{x'} + \\ + &= 2 \pi A B \int_{-\infty}^\infty f(x') \: \delta(s(x' - x)) \dd{x'} + = \frac{2 \pi A B}{|s|} f(x) +\end{aligned}$$ + +Therefore, the constants $A$, $B$, and $s$ are subject to the following +constraint: + +$$\begin{aligned} + \boxed{\frac{2\pi A B}{|s|} = 1} +\end{aligned}$$ + +But that still gives a lot of freedom. The exact choices of $A$ and $B$ +are generally motivated by the [convolution theorem](/know/concept/convolution-theorem/) +and [Parseval's theorem](/know/concept/parsevals-theorem/). + +The choice of $|s|$ depends on whether the frequency variable $k$ +represents the angular ($|s| = 1$) or the physical ($|s| = 2\pi$) +frequency. The sign of $s$ is not so important, but is generally based +on whether the analysis is for forward ($s > 0$) or backward-propagating +($s < 0$) waves. + + +## Derivatives + +The FT of a derivative has a very interesting property. +Below, after integrating by parts, we remove the boundary term by +assuming that $f(x)$ is localized, i.e. 
$f(x) \to 0$ for $x \to \pm \infty$: + +$$\begin{aligned} + \hat{\mathcal{F}}\{f'(x)\} + &= A \int_{-\infty}^\infty f'(x) \exp(i s k x) \dd{x} + \\ + &= A \big[ f(x) \exp(i s k x) \big]_{-\infty}^\infty - i s k A \int_{-\infty}^\infty f(x) \exp(i s k x) \dd{x} + \\ + &= (- i s k) \tilde{f}(k) +\end{aligned}$$ + +Therefore, as long as $f(x)$ is localized, the FT eliminates derivatives +of the transformed variable, which makes it useful against PDEs: + +$$\begin{aligned} + \boxed{ + \hat{\mathcal{F}}\{f'(x)\} = (- i s k) \tilde{f}(k) + } +\end{aligned}$$ + +This generalizes to higher-order derivatives, as long as these +derivatives are also localized in the $x$-domain, which is practically +guaranteed if $f(x)$ itself is localized: + +$$\begin{aligned} + \boxed{ + \hat{\mathcal{F}} \Big\{ \dv[n]{f}{x} \Big\} + = (- i s k)^n \tilde{f}(k) + } +\end{aligned}$$ + +Derivatives in the frequency domain have an analogous property: + +$$\begin{aligned} + \boxed{ + \dv[n]{\tilde{f}}{k} + = A \int_{-\infty}^\infty (i s x)^n f(x) \exp(i s k x) \dd{x} + = \hat{\mathcal{F}}\{ (i s x)^n f(x) \} + } +\end{aligned}$$ diff --git a/content/know/concept/gram-schmidt-method/index.pdc b/content/know/concept/gram-schmidt-method/index.pdc new file mode 100644 index 0000000..88488dd --- /dev/null +++ b/content/know/concept/gram-schmidt-method/index.pdc @@ -0,0 +1,47 @@ +--- +title: "Gram-Schmidt method" +firstLetter: "G" +publishDate: 2021-02-22 +categories: +- Mathematics + +date: 2021-02-22T21:36:08+01:00 +draft: false +markup: pandoc +--- + +# Gram-Schmidt method + +Given a set of linearly independent non-orthonormal vectors +$\ket*{V_1}, \ket*{V_2}, ...$ from a [Hilbert space](/know/concept/hilbert-space/), +the **Gram-Schmidt method** +turns them into an orthonormal set $\ket*{n_1}, \ket*{n_2}, ...$ as follows: + +1. 
Take the first vector $\ket*{V_1}$ and normalize it to get $\ket*{n_1}$: + + $$\begin{aligned} + \ket*{n_1} = \frac{\ket*{V_1}}{\sqrt{\braket*{V_1}{V_1}}} + \end{aligned}$$ + +2. Begin loop. Take the next non-orthonormal vector $\ket*{V_j}$, and + subtract from it its projection onto every already-processed vector: + + $$\begin{aligned} + \ket*{n_j'} = \ket*{V_j} - \ket*{n_1} \braket*{n_1}{V_j} - \ket*{n_2} \braket*{n_2}{V_j} - ... - \ket*{n_{j-1}} \braket*{n_{j-1}}{V_j} + \end{aligned}$$ + + This leaves only the part of $\ket*{V_j}$ which is orthogonal to + $\ket*{n_1}$, $\ket*{n_2}$, etc. This is why the input vectors must be + linearly independent; otherwise $\ket*{n_j'}$ may become zero at some + point. + +3. Normalize the resulting ortho*gonal* vector $\ket*{n_j'}$ to make it + ortho*normal*: + + $$\begin{aligned} + \ket*{n_j} = \frac{\ket*{n_j'}}{\sqrt{\braket*{n_j'}{n_j'}}} + \end{aligned}$$ + +4. Loop back to step 2, taking the next vector $\ket*{V_{j+1}}$. + +If you are unfamiliar with this notation, take a look at [Dirac notation](/know/concept/dirac-notation/). diff --git a/content/know/concept/hilbert-space/index.pdc b/content/know/concept/hilbert-space/index.pdc new file mode 100644 index 0000000..1faf08a --- /dev/null +++ b/content/know/concept/hilbert-space/index.pdc @@ -0,0 +1,202 @@ +--- +title: "Hilbert space" +firstLetter: "H" +publishDate: 2021-02-22 +categories: +- Mathematics +- Quantum mechanics + +date: 2021-02-22T21:36:24+01:00 +draft: false +markup: pandoc +--- + +# Hilbert space + +A **Hilbert space**, also known as an **inner product space**, is an +abstract **vector space** with a notion of length and angle. + + +## Vector space + +An abstract **vector space** $\mathbb{V}$ is a generalization of the +traditional concept of vectors as "arrows". It consists of a set of +objects called **vectors** which support the following (familiar) +operations: + ++ **Vector addition**: the sum of two vectors $V$ and $W$, denoted $V + W$.
++ **Scalar multiplication**: product of a vector $V$ with a scalar $a$, denoted $a V$. + +In addition, for a given $\mathbb{V}$ to qualify as a proper vector +space, these operations must obey the following axioms: + ++ **Addition is associative**: $U + (V + W) = (U + V) + W$ ++ **Addition is commutative**: $U + V = V + U$ ++ **Addition has an identity**: there exists a $\mathbf{0}$ such that $V + \mathbf{0} = V$ ++ **Addition has an inverse**: for every $V$ there exists $-V$ so that $V + (-V) = \mathbf{0}$ ++ **Multiplication is associative**: $a (b V) = (a b) V$ ++ **Multiplication has an identity**: there exists a $1$ such that $1 V = V$ ++ **Multiplication is distributive over scalars**: $(a + b)V = aV + bV$ ++ **Multiplication is distributive over vectors**: $a (U + V) = a U + a V$ + +A set of $N$ vectors $V_1, V_2, ..., V_N$ is **linearly independent** if +the only way to satisfy the following relation is to set all the scalar coefficients $a_n = 0$: + +$$\begin{aligned} + \mathbf{0} = \sum_{n = 1}^N a_n V_n +\end{aligned}$$ + +In other words, these vectors cannot be expressed in terms of each +other. Otherwise, they would be **linearly dependent**. + +A vector space $\mathbb{V}$ has **dimension** $N$ if only up to $N$ of +its vectors can be linearly independent. All other vectors in +$\mathbb{V}$ can then be written as a **linear combination** of these $N$ **basis vectors**.
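As an aside (an addition to the text, with arbitrary example vectors), linear independence is easy to test numerically: stack the vectors as the columns of a matrix, and they are independent exactly when the matrix rank equals the number of vectors.

```python
import numpy as np

v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = v1 + 2 * v2    # deliberately a linear combination of v1 and v2

# two independent vectors: rank 2
assert np.linalg.matrix_rank(np.column_stack([v1, v2])) == 2
# adding a dependent vector does not raise the rank
assert np.linalg.matrix_rank(np.column_stack([v1, v2, v3])) == 2
```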
+ +Let $\vu{e}_1, ..., \vu{e}_N$ be the basis vectors; then any +vector $V$ in the same space can be **expanded** in the basis according to +the unique weights $v_n$, known as the **components** of $V$ +in that basis: + +$$\begin{aligned} + V = \sum_{n = 1}^N v_n \vu{e}_n +\end{aligned}$$ + +Using these, the vector space operations can then be implemented as follows: + +$$\begin{gathered} + V = \sum_{n = 1}^N v_n \vu{e}_n + \quad + W = \sum_{n = 1}^N w_n \vu{e}_n + \\ + \quad \implies \quad + V + W = \sum_{n = 1}^N (v_n + w_n) \vu{e}_n + \qquad + a V = \sum_{n = 1}^N a v_n \vu{e}_n +\end{gathered}$$ + + +## Inner product + +A given vector space $\mathbb{V}$ can be promoted to a **Hilbert space** +or **inner product space** if it supports an operation $\braket{U}{V}$ +called the **inner product**, which takes two vectors and returns a +scalar, and has the following properties: + ++ **Conjugate symmetry**: $\braket{U}{V} = (\braket{V}{U})^*$, where ${}^*$ is the complex conjugate. ++ **Positive definiteness**: $\braket{V}{V} \ge 0$, and $\braket{V}{V} = 0$ if and only if $V = \mathbf{0}$. ++ **Linearity in second operand**: $\braket{U}{(a V + b W)} = a \braket{U}{V} + b \braket{U}{W}$. + +The inner product describes the lengths and angles of vectors, and in +Euclidean space it is implemented by the dot product. + +The **magnitude** or **norm** $|V|$ of a vector $V$ is given by +$|V| = \sqrt{\braket{V}{V}}$ and represents the real positive length of $V$. +A **unit vector** has a norm of 1. + +Two vectors $U$ and $V$ are **orthogonal** if their inner product +$\braket{U}{V} = 0$. If in addition to being orthogonal, $|U| = 1$ and +$|V| = 1$, then $U$ and $V$ are known as **orthonormal** vectors. + +Orthonormality is desirable for basis vectors, so if they are +not already like that, it is common to manually turn them into a new +orthonormal basis using e.g. the [Gram-Schmidt method](/know/concept/gram-schmidt-method).
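As a small illustration (an addition to the text, with arbitrary example vectors), the Gram-Schmidt procedure and the resulting orthonormality $\braket{n_i}{n_j} = \delta_{ij}$ can be sketched as:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors (classical G-S)."""
    basis = []
    for v in vectors:
        w = v.astype(complex)
        for n in basis:
            w = w - n * (n.conj() @ v)    # subtract the projection <n|v> n
        basis.append(w / np.sqrt((w.conj() @ w).real))
    return np.array(basis)

vs = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([0.0, 1.0, 1.0])]
ns = gram_schmidt(vs)

# orthonormality check: <n_i|n_j> = delta_ij
assert np.allclose(ns.conj() @ ns.T, np.eye(3))
```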
+ +As for the implementation of the inner product, it is given by: + +$$\begin{gathered} + V = \sum_{n = 1}^N v_n \vu{e}_n + \quad + W = \sum_{n = 1}^N w_n \vu{e}_n + \\ + \quad \implies \quad + \braket{V}{W} = \sum_{n = 1}^N \sum_{m = 1}^N v_n^* w_m \braket{\vu{e}_n}{\vu{e}_m} +\end{gathered}$$ + +If the basis vectors $\vu{e}_1, ..., \vu{e}_N$ are already +orthonormal, this reduces to: + +$$\begin{aligned} + \braket{V}{W} = \sum_{n = 1}^N v_n^* w_n +\end{aligned}$$ + +As it turns out, the components $v_n$ are given by the inner product +with $\vu{e}_n$, where $\delta_{nm}$ is the Kronecker delta: + +$$\begin{aligned} + \braket{\vu{e}_n}{V} = \sum_{m = 1}^N \delta_{nm} v_m = v_n +\end{aligned}$$ + + +## Infinite dimensions + +As the dimensionality $N$ tends to infinity, things may or may not +change significantly, depending on whether $N$ is **countably** or +**uncountably** infinite. + +In the former case, not much changes: the infinitely many **discrete** +basis vectors $\vu{e}_n$ can all still be made orthonormal as usual, +and as before: + +$$\begin{aligned} + V = \sum_{n = 1}^\infty v_n \vu{e}_n +\end{aligned}$$ + +A good example of such a countably infinite basis is the set of +solution eigenfunctions of a [Sturm-Liouville problem](/know/concept/sturm-liouville-theory/). + +However, if the dimensionality is uncountably infinite, the basis +vectors are **continuous** and cannot be labeled by $n$. For example, all +complex functions $f(x)$ defined for $x \in [a, b]$ which +satisfy $f(a) = f(b) = 0$ form such a vector space. +In this case the ket $\ket{f}$ is expanded in basis vectors $\ket{x}$, +with the function values $f(x) = \braket{x}{f}$ acting as the components: + +$$\begin{aligned} + \ket{f} = \int_a^b \ket{x} \braket{x}{f} \dd{x} +\end{aligned}$$ + +Similarly, the inner product $\braket{f}{g}$ must also be redefined as +follows: + +$$\begin{aligned} + \braket{f}{g} = \int_a^b f^*(x) \: g(x) \dd{x} +\end{aligned}$$ + +The concept of orthonormality must also be weakened.
A finite function +$f(x)$ can be normalized as usual, but the basis vectors $x$ themselves +cannot, since each represents an infinitesimal section of the real line. + +The rationale in this case is that the action of the identity operator $\hat{I}$ must +be preserved, which is given here in [Dirac notation](/know/concept/dirac-notation/): + +$$\begin{aligned} + \hat{I} = \int_a^b \ket{\xi} \bra{\xi} \dd{\xi} +\end{aligned}$$ + +Applying the identity operator to $f(x)$ should just give $f(x)$ again: + +$$\begin{aligned} + f(x) = \braket{x}{f} = \matrixel{x}{\hat{I}}{f} + = \int_a^b \braket{x}{\xi} \braket{\xi}{f} \dd{\xi} + = \int_a^b \braket{x}{\xi} f(\xi) \dd{\xi} +\end{aligned}$$ + +Since we want the latter integral to reduce to $f(x)$, it is plain to see that +$\braket{x}{\xi}$ can only be a [Dirac delta function](/know/concept/dirac-delta-function/), +i.e. $\braket{x}{\xi} = \delta(x - \xi)$: + +$$\begin{aligned} + \int_a^b \braket{x}{\xi} f(\xi) \dd{\xi} + = \int_a^b \delta(x - \xi) f(\xi) \dd{\xi} + = f(x) +\end{aligned}$$ + +Consequently, $\braket{x}{\xi} = 0$ if $x \neq \xi$ as expected for an +orthogonal set of basis vectors, but if $x = \xi$ the inner product +$\braket{x}{\xi}$ is infinite, unlike earlier. + +Technically, because the basis vectors $x$ cannot be normalized, they +are not members of a Hilbert space, but rather of a superset called a +**rigged Hilbert space**. Such vectors have no finite inner product with +themselves, but do have one with all vectors from the actual Hilbert +space.
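The function-space inner product $\braket{f}{g} = \int_a^b f^*(x) \, g(x) \dd{x}$ is easy to approximate numerically. In the sketch below (an addition to the text; the normalized sine functions are an illustrative choice of orthonormal functions on $[0, \pi]$, not taken from the article), the integral is discretized with a midpoint rule:

```python
import numpy as np

M = 20000
dx = np.pi / M
x = (np.arange(M) + 0.5) * dx        # midpoints of [0, pi]

def inner(f, g):
    """Midpoint-rule approximation of <f|g> = int f*(x) g(x) dx."""
    return np.sum(np.conj(f(x)) * g(x)) * dx

f1 = lambda t: np.sqrt(2 / np.pi) * np.sin(t)
f2 = lambda t: np.sqrt(2 / np.pi) * np.sin(2 * t)

assert abs(inner(f1, f1) - 1.0) < 1e-6   # normalized: <f1|f1> = 1
assert abs(inner(f1, f2)) < 1e-6         # orthogonal: <f1|f2> = 0
```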
diff --git a/content/know/concept/legendre-transform/index.pdc b/content/know/concept/legendre-transform/index.pdc new file mode 100644 index 0000000..8a0d3e3 --- /dev/null +++ b/content/know/concept/legendre-transform/index.pdc @@ -0,0 +1,89 @@ +--- +title: "Legendre transform" +firstLetter: "L" +publishDate: 2021-02-22 +categories: +- Mathematics +- Physics + +date: 2021-02-22T21:36:35+01:00 +draft: false +markup: pandoc +--- + +# Legendre transform + +The **Legendre transform** of a function $f(x)$ is a new function $L(f')$, +which depends only on the derivative $f'(x)$ of $f(x)$, and from which +the original function $f(x)$ can be reconstructed. The point is, +analogously to other transforms (e.g. [Fourier](/know/concept/fourier-transform/)), +that $L(f')$ contains the same information as $f(x)$, just in a different form. + +Let us choose an arbitrary point $x_0 \in [a, b]$ in the domain of +$f(x)$. Consider a line $y(x)$ tangent to $f(x)$ at $x = x_0$, which has +a slope $f'(x_0)$ and intersects the $y$-axis at $-C$: + +$$\begin{aligned} + y(x) = f'(x_0) (x - x_0) + f(x_0) = f'(x_0) x - C +\end{aligned}$$ + +The Legendre transform $L(f')$ is defined such that $L(f'(x_0)) = C$ (or +sometimes $-C$ instead) for all $x_0 \in [a, b]$, where $C$ is the +constant corresponding to the tangent line at $x = x_0$. This yields: + +$$\begin{aligned} + L(f'(x)) = f'(x) \: x - f(x) +\end{aligned}$$ + +We want this function to depend only on the derivative $f'$, but +currently $x$ still appears here as a variable. We fix that problem in +the easiest possible way: by assuming that $f'(x)$ is invertible for all +$x \in [a, b]$. If $x(f')$ is the inverse of $f'(x)$, then $L(f')$ is +given by: + +$$\begin{aligned} + \boxed{ + L(f') = f' \: x(f') - f(x(f')) + } +\end{aligned}$$ + +The only requirement for the existence of the Legendre transform is thus +the invertibility of $f'(x)$ in the target interval $[a,b]$, which can +only be true if $f(x)$ is either convex or concave, i.e. 
its derivative +$f'(x)$ is monotonic. + +Crucially, the derivative of $L(f')$ with respect to $f'$ is simply +$x(f')$. In other words, the roles of $f'$ and $x$ are switched by the +transformation: the coordinate becomes the derivative and vice versa. +This is demonstrated here: + +$$\begin{aligned} + \boxed{ + \dv{L}{f'} = \dv{x}{f'} \: f' + x(f') - \dv{f}{x} \dv{x}{f'} = x(f') + } +\end{aligned}$$ + +Furthermore, Legendre transformation is an *involution*, meaning it is +its own inverse. Let $g(L')$ be the Legendre transform of $L(f')$: + +$$\begin{aligned} + g(L') = L' \: f'(L') - L(f'(L')) + = x(f') \: f' - f' \: x(f') + f(x(f')) = f(x) +\end{aligned}$$ + +Moreover, the inverse of a (forward) transform always exists, because +the Legendre transform of a convex function is itself convex. Convexity +of $f(x)$ means that $f''(x) > 0$ for all $x \in [a, b]$, which yields +the following proof: + +$$\begin{aligned} + L''(f') + = \dv{x(f')}{f'} + = \dv{x}{f'(x)} + = \frac{1}{f''(x)} + > 0 +\end{aligned}$$ + +Legendre transformation is important in physics, +since it connects Lagrangian and Hamiltonian mechanics to each other. +It is also used to convert between thermodynamic potentials. diff --git a/content/know/concept/parsevals-theorem/index.pdc b/content/know/concept/parsevals-theorem/index.pdc new file mode 100644 index 0000000..8f653f8 --- /dev/null +++ b/content/know/concept/parsevals-theorem/index.pdc @@ -0,0 +1,76 @@ +--- +title: "Parseval's theorem" +firstLetter: "P" +publishDate: 2021-02-22 +categories: +- Mathematics +- Physics + +date: 2021-02-22T21:36:44+01:00 +draft: false +markup: pandoc +--- + +# Parseval's theorem + +**Parseval's theorem** relates the inner product of two functions $f(x)$ and $g(x)$ to the +inner product of their [Fourier transforms](/know/concept/fourier-transform/) +$\tilde{f}(k)$ and $\tilde{g}(k)$. 
+There are two equivalent ways of stating it, +where $A$, $B$, and $s$ are constants from the Fourier transform's definition: + +$$\begin{aligned} + \boxed{ + \braket{f(x)}{g(x)} = \frac{2 \pi B^2}{|s|} \braket*{\tilde{f}(k)}{\tilde{g}(k)} + } + \\ + \boxed{ + \braket*{\tilde{f}(k)}{\tilde{g}(k)} = \frac{2 \pi A^2}{|s|} \braket{f(x)}{g(x)} + } +\end{aligned}$$ + +For this reason, physicists like to define their Fourier transform +with $A = B = 1 / \sqrt{2\pi}$ and $|s| = 1$, because then the FT nicely +conserves the total probability (quantum mechanics) or the total energy +(optics). + +To prove this, we insert the inverse FT into the inner product +definition: + +$$\begin{aligned} + \braket{f}{g} + &= \int_{-\infty}^\infty \big( \hat{\mathcal{F}}^{-1}\{\tilde{f}(k)\}\big)^* \: \hat{\mathcal{F}}^{-1}\{\tilde{g}(k)\} \dd{x} + \\ + &= B^2 \int + \Big( \int \tilde{f}^*(k_1) \exp(i s k_1 x) \dd{k_1} \Big) + \Big( \int \tilde{g}(k) \exp(- i s k x) \dd{k} \Big) + \dd{x} + \\ + &= 2 \pi B^2 \iint \tilde{f}^*(k_1) \tilde{g}(k) \Big( \frac{1}{2 \pi} \int_{-\infty}^\infty \exp(i s x (k_1 - k)) \dd{x} \Big) \dd{k_1} \dd{k} + \\ + &= 2 \pi B^2 \iint \tilde{f}^*(k_1) \: \tilde{g}(k) \: \delta(s (k_1 - k)) \dd{k_1} \dd{k} + \\ + &= \frac{2 \pi B^2}{|s|} \int_{-\infty}^\infty \tilde{f}^*(k) \: \tilde{g}(k) \dd{k} + = \frac{2 \pi B^2}{|s|} \braket*{\tilde{f}}{\tilde{g}} +\end{aligned}$$ + +Where $\delta(k)$ is the [Dirac delta function](/know/concept/dirac-delta-function/). 
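This norm conservation is easy to see numerically in a discrete setting. The following Python sketch is our own illustration (the naive unnormalized DFT used here, playing the role of $A = 1$ and $B = 1/N$ in discrete form, is an assumption for demonstration purposes, not the article's continuous transform); it checks the discrete analogue $\sum_n |f_n|^2 = \frac{1}{N} \sum_k |F_k|^2$:

```python
import cmath

def dft(f):
    """Naive unnormalized DFT: F_k = sum_n f_n exp(-2 pi i k n / N)."""
    N = len(f)
    return [sum(f[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

f = [0.5, -1.0, 2.0, 0.25, 1.5, -0.75]   # arbitrary test signal
F = dft(f)

# "energy" in the x-domain and in the k-domain, with the 1/N correction
energy_x = sum(abs(v) ** 2 for v in f)
energy_k = sum(abs(v) ** 2 for v in F) / len(f)

assert abs(energy_x - energy_k) < 1e-9
```

Here the factor $1/N$ plays the same role as $2 \pi B^2 / |s|$ in the continuous statement above.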
+Note that we can just as well do it in the opposite direction, +which yields an equivalent result: + +$$\begin{aligned} + \braket*{\tilde{f}}{\tilde{g}} + &= \int_{-\infty}^\infty \big( \hat{\mathcal{F}}\{f(x)\}\big)^* \: \hat{\mathcal{F}}\{g(x)\} \dd{k} + \\ + &= A^2 \int + \Big( \int f^*(x_1) \exp(- i s k x_1) \dd{x_1} \Big) + \Big( \int g(x) \exp(i s k x) \dd{x} \Big) + \dd{k} + \\ + &= 2 \pi A^2 \iint f^*(x_1) g(x) \Big( \frac{1}{2 \pi} \int_{-\infty}^\infty \exp(i s k (x_1 - x)) \dd{k} \Big) \dd{x_1} \dd{x} + \\ + &= 2 \pi A^2 \iint f^*(x_1) \: g(x) \: \delta(s (x_1 - x)) \dd{x_1} \dd{x} + \\ + &= \frac{2 \pi A^2}{|s|} \int_{-\infty}^\infty f^*(x) \: g(x) \dd{x} + = \frac{2 \pi A^2}{|s|} \braket{f}{g} +\end{aligned}$$ diff --git a/content/know/concept/partial-fraction-decomposition/index.pdc b/content/know/concept/partial-fraction-decomposition/index.pdc new file mode 100644 index 0000000..1f4207f --- /dev/null +++ b/content/know/concept/partial-fraction-decomposition/index.pdc @@ -0,0 +1,60 @@ +--- +title: "Partial fraction decomposition" +firstLetter: "P" +publishDate: 2021-02-22 +categories: +- Mathematics + +date: 2021-02-22T21:36:56+01:00 +draft: false +markup: pandoc +--- + +# Partial fraction decomposition + +**Partial fraction decomposition** or **expansion** is a method to rewrite a +quotient of two polynomials $g(x)$ and $h(x)$, where the numerator +$g(x)$ is of lower order than $h(x)$, as a sum of fractions with $x$ in +the denominator: + +$$\begin{aligned} + f(x) = \frac{g(x)}{h(x)} = \frac{c_1}{x - h_1} + \frac{c_2}{x - h_2} + ... +\end{aligned}$$ + +Where $h_n$ etc. are the roots of the denominator $h(x)$. If all $N$ of +these roots are distinct, then it is sufficient to simply posit: + +$$\begin{aligned} + \boxed{ + f(x) = \frac{c_1}{x - h_1} + \frac{c_2}{x - h_2} + ... 
+ \frac{c_N}{x - h_N} + } +\end{aligned}$$ + +The constants $c_n$ can either be found the hard way, +by multiplying the denominators around and solving a system of $N$ +equations, or the easy way by using this trick: + +$$\begin{aligned} + \boxed{ + c_n = \lim_{x \to h_n} \big( f(x) (x - h_n) \big) + } +\end{aligned}$$ + +If $h_1$ is a root with multiplicity $m > 1$, then the sum takes the form of: + +$$\begin{aligned} + \boxed{ + f(x) + = \frac{c_{1,1}}{x - h_1} + \frac{c_{1,2}}{(x - h_1)^2} + ... + } +\end{aligned}$$ + +Where $c_{1,j}$ are found by putting the terms on a common denominator, e.g. + +$$\begin{aligned} + \frac{c_{1,1}}{x - h_1} + \frac{c_{1,2}}{(x - h_1)^2} + = \frac{c_{1,1} (x - h_1) + c_{1,2}}{(x - h_1)^2} +\end{aligned}$$ + +And then, using the linear independence of $x^0, x^1, x^2, ...$, solving +a system of $m$ equations to find all $c_{1,1}, ..., c_{1,m}$. diff --git a/content/know/concept/pauli-exclusion-principle/index.pdc b/content/know/concept/pauli-exclusion-principle/index.pdc new file mode 100644 index 0000000..aa9609b --- /dev/null +++ b/content/know/concept/pauli-exclusion-principle/index.pdc @@ -0,0 +1,125 @@ +--- +title: "Pauli exclusion principle" +firstLetter: "P" +publishDate: 2021-02-22 +categories: +- Quantum mechanics +- Physics + +date: 2021-02-22T21:37:14+01:00 +draft: false +markup: pandoc +--- + +# Pauli exclusion principle + +In quantum mechanics, the **Pauli exclusion principle** is a theorem with +profound consequences for how the world works. + +Suppose we have a composite state +$\ket*{x_1}\ket*{x_2} = \ket*{x_1} \otimes \ket*{x_2}$, where the two +identical particles $x_1$ and $x_2$ each can occupy the same two allowed +states $a$ and $b$. We then define the permutation operator $\hat{P}$ as +follows: + +$$\begin{aligned} + \hat{P} \ket{a}\ket{b} = \ket{b}\ket{a} +\end{aligned}$$ + +That is, it swaps the states of the particles. 
Obviously, swapping the +states twice simply gives the original configuration again, so: + +$$\begin{aligned} + \hat{P}^2 \ket{a}\ket{b} = \ket{a}\ket{b} +\end{aligned}$$ + +Therefore, $\ket{a}\ket{b}$ is an eigenvector of $\hat{P}^2$ with +eigenvalue $1$. Since $[\hat{P}, \hat{P}^2] = 0$, $\ket{a}\ket{b}$ +must also be an eigenket of $\hat{P}$ with eigenvalue $\lambda$, +satisfying $\lambda^2 = 1$, so we know that $\lambda = 1$ or $\lambda = -1$: + +$$\begin{aligned} + \hat{P} \ket{a}\ket{b} = \lambda \ket{a}\ket{b} +\end{aligned}$$ + +As it turns out, in nature, each class of particle has a single +associated permutation eigenvalue $\lambda$, or in other words: whether +$\lambda$ is $-1$ or $1$ depends on the type of particle that $x_1$ +and $x_2$ are. Particles with $\lambda = -1$ are called +**fermions**, and those with $\lambda = 1$ are known as **bosons**. We +define $\hat{P}_f$ with $\lambda = -1$ and $\hat{P}_b$ with +$\lambda = 1$, such that: + +$$\begin{aligned} + \hat{P}_f \ket{a}\ket{b} = \ket{b}\ket{a} = - \ket{a}\ket{b} + \qquad + \hat{P}_b \ket{a}\ket{b} = \ket{b}\ket{a} = \ket{a}\ket{b} +\end{aligned}$$ + +Another fundamental fact of nature is that identical particles cannot be +distinguished by any observation. Therefore it is impossible to tell +apart $\ket{a}\ket{b}$ and the permuted state $\ket{b}\ket{a}$, +regardless of the eigenvalue $\lambda$. There is no physical difference! + +But this does not mean that $\hat{P}$ is useless: despite not having any +observable effect, the resulting difference between fermions and bosons +is absolutely fundamental. Consider the following superposition state, +where $\alpha$ and $\beta$ are unknown: + +$$\begin{aligned} + \ket{\Psi(a, b)} + = \alpha \ket{a}\ket{b} + \beta \ket{b}\ket{a} +\end{aligned}$$ + +When we apply $\hat{P}$, we can "choose" between two "interpretations" of +its action, both shown below.
Obviously, since the left-hand sides are +equal, the right-hand sides must be equal too: + +$$\begin{aligned} + \hat{P} \ket{\Psi(a, b)} + &= \lambda \alpha \ket{a}\ket{b} + \lambda \beta \ket{b}\ket{a} + \\ + \hat{P} \ket{\Psi(a, b)} + &= \alpha \ket{b}\ket{a} + \beta \ket{a}\ket{b} +\end{aligned}$$ + +This gives us the equations $\lambda \alpha = \beta$ and +$\lambda \beta = \alpha$. In fact, just from this we could have deduced +that $\lambda$ can be either $-1$ or $1$. In any case, for bosons +($\lambda = 1$), we thus find that $\alpha = \beta$: + +$$\begin{aligned} + \ket{\Psi(a, b)}_b = C \big( \ket{a}\ket{b} + \ket{b}\ket{a} \big) +\end{aligned}$$ + +Where $C$ is a normalization constant. As expected, this state is +**symmetric**: switching $a$ and $b$ gives the same result. Meanwhile, for +fermions ($\lambda = -1$), we find that $\alpha = -\beta$: + +$$\begin{aligned} + \ket{\Psi(a, b)}_f = C \big( \ket{a}\ket{b} - \ket{b}\ket{a} \big) +\end{aligned}$$ + +This state is called **antisymmetric** under exchange: switching $a$ and $b$ +causes a sign change, as we would expect for fermions. + +Now, what if the particles $x_1$ and $x_2$ are in the same state $a$? +For bosons, we just need to update the normalization constant $C$: + +$$\begin{aligned} + \ket{\Psi(a, a)}_b + = C \ket{a}\ket{a} +\end{aligned}$$ + +However, for fermions, the state is unnormalizable and thus unphysical: + +$$\begin{aligned} + \ket{\Psi(a, a)}_f + = C \big( \ket{a}\ket{a} - \ket{a}\ket{a} \big) + = 0 +\end{aligned}$$ + +And this is the Pauli exclusion principle: **fermions may never +occupy the same quantum state**. One of the many notable consequences of +this is that the shells of atoms only fit a limited number of +electrons (which are fermions), since each must have a different quantum number. 
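The bookkeeping above can be mimicked numerically. The following Python sketch is our own illustration (the helper names `tensor`, `combine`, and `swap` are ad-hoc, not standard notation): it represents a two-particle state as a dictionary of amplitudes, and confirms that the symmetric combination is invariant under exchange, that the antisymmetric one flips sign, and that two fermions in the same state give a vanishing wave function:

```python
def tensor(a, b):
    """Product state |a>|b> as a coefficient dictionary."""
    return {(a, b): 1.0}

def combine(u, v, sign):
    """Unnormalized superposition u + sign * v."""
    out = {}
    for d, s in ((u, 1.0), (v, sign)):
        for k, c in d.items():
            out[k] = out.get(k, 0.0) + s * c
    return out

def swap(state):
    """Permutation operator: exchange the two particles."""
    return {(k2, k1): c for (k1, k2), c in state.items()}

boson   = combine(tensor("a", "b"), tensor("b", "a"), +1.0)  # symmetric
fermion = combine(tensor("a", "b"), tensor("b", "a"), -1.0)  # antisymmetric

assert swap(boson) == boson                                # P |Psi> = +|Psi>
assert {k: -c for k, c in swap(fermion).items()} == fermion  # P |Psi> = -|Psi>

# two fermions in the same state: the wave function vanishes identically
same = combine(tensor("a", "a"), tensor("a", "a"), -1.0)
assert all(abs(c) < 1e-15 for c in same.values())
```

Of course, the real (anti)symmetrization acts on wave functions, not labels; the dictionary is only bookkeeping for the amplitudes $\alpha$ and $\beta$.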
diff --git a/content/know/concept/probability-current/index.pdc b/content/know/concept/probability-current/index.pdc new file mode 100644 index 0000000..c67956a --- /dev/null +++ b/content/know/concept/probability-current/index.pdc @@ -0,0 +1,98 @@ +--- +title: "Probability current" +firstLetter: "P" +publishDate: 2021-02-22 +categories: +- Quantum mechanics +- Physics + +date: 2021-02-22T21:37:26+01:00 +draft: false +markup: pandoc +--- + +# Probability current + +In quantum mechanics, the **probability current** describes the movement +of the probability of finding a particle at a given point in space. +In other words, it treats the particle as a heterogeneous fluid with density $|\psi|^2$. +Now, the probability of finding the particle within a volume $V$ is: + +$$\begin{aligned} + P = \int_{V} | \psi |^2 \dd[3]{\vec{r}} +\end{aligned}$$ + +As the system evolves in time, this probability may change, so we take +its derivative with respect to time $t$, and when necessary substitute +in the other side of the Schrödinger equation to get: + +$$\begin{aligned} + \pdv{P}{t} + &= \int_{V} \psi \pdv{\psi^*}{t} + \psi^* \pdv{\psi}{t} \dd[3]{\vec{r}} + = \frac{i}{\hbar} \int_{V} \psi (\hat{H} \psi^*) - \psi^* (\hat{H} \psi) \dd[3]{\vec{r}} + \\ + &= \frac{i}{\hbar} \int_{V} \psi \Big( \!-\! \frac{\hbar^2}{2 m} \nabla^2 \psi^* + V(\vec{r}) \psi^* \Big) + - \psi^* \Big( \!-\!
\frac{\hbar^2}{2 m} \nabla^2 \psi + V(\vec{r}) \psi \Big) \dd[3]{\vec{r}} + \\ + &= \frac{i \hbar}{2 m} \int_{V} - \psi \nabla^2 \psi^* + \psi^* \nabla^2 \psi \dd[3]{\vec{r}} + = - \int_{V} \nabla \cdot \vec{J} \dd[3]{\vec{r}} +\end{aligned}$$ + +Where we have defined the probability current $\vec{J}$ as follows in +the $\vec{r}$-basis: + +$$\begin{aligned} + \vec{J} + = \frac{i \hbar}{2 m} (\psi \nabla \psi^* - \psi^* \nabla \psi) + = \mathrm{Re} \Big\{ \psi \frac{i \hbar}{m} \nabla \psi^* \Big\} +\end{aligned}$$ + +Let us rewrite this using the momentum operator +$\hat{p} = -i \hbar \nabla$ as follows, noting that $\hat{p} / m$ is +simply the velocity operator $\hat{v}$: + +$$\begin{aligned} + \boxed{ + \vec{J} + = \frac{1}{2 m} ( \psi^* \hat{p} \psi - \psi \hat{p} \psi^*) + = \mathrm{Re} \Big\{ \psi^* \frac{\hat{p}}{m} \psi \Big\} + = \mathrm{Re} \{ \psi^* \hat{v} \psi \} + } +\end{aligned}$$ + +Returning to the derivation of $\vec{J}$, we now have the following +equation: + +$$\begin{aligned} + \pdv{P}{t} + = \int_{V} \pdv{|\psi|^2}{t} \dd[3]{\vec{r}} + = - \int_{V} \nabla \cdot \vec{J} \dd[3]{\vec{r}} +\end{aligned}$$ + +By removing the integrals, we thus arrive at the **continuity equation** +for $\vec{J}$: + +$$\begin{aligned} + \boxed{ + \nabla \cdot \vec{J} + = - \pdv{|\psi|^2}{t} + } +\end{aligned}$$ + +This states that the total probability is conserved, and is reminiscent of charge +conservation in electromagnetism. In other words, the probability at a +point can only change by letting it "flow" towards or away from it. Thus +$\vec{J}$ represents the flow of probability, which is analogous to the +motion of a particle. + +As a bonus, this still holds for a particle in an electromagnetic vector +potential $\vec{A}$, thanks to the gauge invariance of the Schrödinger +equation.
We can thus extend the definition to a particle with charge +$q$ in an SI-unit field, neglecting spin: + +$$\begin{aligned} + \boxed{ + \vec{J} + = \mathrm{Re} \Big\{ \psi^* \frac{\hat{p} - q \vec{A}}{m} \psi \Big\} + } +\end{aligned}$$ diff --git a/content/know/concept/slater-determinant/index.pdc b/content/know/concept/slater-determinant/index.pdc new file mode 100644 index 0000000..8bc4291 --- /dev/null +++ b/content/know/concept/slater-determinant/index.pdc @@ -0,0 +1,54 @@ +--- +title: "Slater determinant" +firstLetter: "S" +publishDate: 2021-02-22 +categories: +- Quantum mechanics +- Physics + +date: 2021-02-22T21:38:03+01:00 +draft: false +markup: pandoc +--- + +# Slater determinant + +In quantum mechanics, the **Slater determinant** is a trick +to create a many-particle wave function for a system of $N$ fermions, +with the necessary antisymmetry. + +Given an orthogonal set of individual states $\psi_n(x)$, we write +$\psi_n(x_n)$ to say that particle $x_n$ is in state $\psi_n$. Now the +goal is to find an expression for an overall many-particle wave +function $\Psi(x_1, ..., x_N)$ that satisfies the +[Pauli exclusion principle](/know/concept/pauli-exclusion-principle/). +Enter the Slater determinant: + +$$\begin{aligned} + \boxed{ + \Psi(x_1, ..., x_N) + = \frac{1}{\sqrt{N!}} \det\! + \begin{bmatrix} + \psi_1(x_1) & \cdots & \psi_N(x_1) \\ + \vdots & \ddots & \vdots \\ + \psi_1(x_N) & \cdots & \psi_N(x_N) + \end{bmatrix} + }\end{aligned}$$ + +Swapping the state of two particles corresponds to exchanging two rows, +which flips the sign of the determinant. +Similarly, switching two columns means swapping two states, +which also results in a sign change. +Finally, putting two particles into the same state makes $\Psi$ vanish. + +Not all valid many-fermion wave functions can be +written as a single Slater determinant; a linear combination of multiple +may be needed. 
Nevertheless, an appropriate choice of the input set +$\psi_n(x)$ can optimize how well a single determinant approximates a +given $\Psi$. + +In fact, there exists a similar trick for bosons, where the goal is to +create a symmetric wave function which allows multiple particles to +occupy the same state. In this case, one needs to take the **Slater +permanent** of the same matrix, which is simply the determinant, but with +all minuses replaced by pluses. diff --git a/content/know/concept/sturm-liouville-theory/index.pdc b/content/know/concept/sturm-liouville-theory/index.pdc new file mode 100644 index 0000000..7ccd625 --- /dev/null +++ b/content/know/concept/sturm-liouville-theory/index.pdc @@ -0,0 +1,346 @@ +--- +title: "Sturm-Liouville theory" +firstLetter: "S" +publishDate: 2021-02-23 +categories: +- Mathematics +- Physics + +date: 2021-02-23T08:52:28+01:00 +draft: false +markup: pandoc +--- + +# Sturm-Liouville theory + +**Sturm-Liouville theory** defines the analogue of Hermitian matrix +eigenvalue problems for linear second-order ODEs. + +It states that, given suitable boundary conditions, any linear +second-order ODE can be rewritten using the **Sturm-Liouville operator**, +and that the corresponding eigenvalue problem, known as a +**Sturm-Liouville problem**, will give real eigenvalues and a complete set +of eigenfunctions. + + +## General operator + +Consider the most general form of a second-order linear +differential operator $\hat{L}$, where $p_0(x)$, $p_1(x)$, and $p_2(x)$ +are real functions of $x \in [a,b]$ which are non-zero for all $x \in ]a, b[$: + +$$\begin{aligned} + \hat{L} \{u(x)\} = p_0(x) u''(x) + p_1(x) u'(x) + p_2(x) u(x) +\end{aligned}$$ + +We now define the **adjoint** or **Hermitian** operator +$\hat{L}^\dagger$ analogously to matrices: + +$$\begin{aligned} + \braket*{f}{\hat{L} g} + = \braket*{\hat{L}^\dagger f}{g} +\end{aligned}$$ + +What is $\hat{L}^\dagger$, given the above definition of $\hat{L}$? 
+We start from the inner product $\braket*{f}{\hat{L} g}$: + +$$\begin{aligned} + \braket*{f}{\hat{L} g} + &= \int_a^b f^*(x) \hat{L}\{g(x)\} \dd{x} + = \int_a^b (f^* p_0) g'' + (f^* p_1) g' + (f^* p_2) g \dd{x} + \\ + &= \big[ (f^* p_0) g' + (f^* p_1) g \big]_a^b - \int_a^b (f^* p_0)' g' + (f^* p_1)' g - (f^* p_2) g \dd{x} + \\ + &= \big[ f^* \big( p_0 g' \!+\! p_1 g \big) \!-\! (f^* p_0)' g \big]_a^b + \int_a^b \! \big( (f p_0)'' - (f p_1)' + (f p_2) \big)^* g \dd{x} + \\ + &= \big[ f^* \big( p_0 g' + (p_1 - p_0') g \big) - (f^*)' p_0 g \big]_a^b + \int_a^b \big( \hat{L}^\dagger\{f\} \big)^* g \dd{x} +\end{aligned}$$ + +We now have an expression for $\hat{L}^\dagger$, but are left with an +annoying boundary term: + +$$\begin{aligned} + \braket*{f}{\hat{L} g} + &= \big[ f^* \big( p_0 g' + (p_1 - p_0') g \big) - (f^*)' p_0 g \big]_a^b + \braket*{\hat{L}^\dagger f}{g} +\end{aligned}$$ + +To fix this, +let us demand that $p_1(x) = p_0'(x)$ and that +$[p_0(f^* g' - (f^*)' g)]_a^b = 0$, leaving: + +$$\begin{aligned} + \braket*{f}{\hat{L} g} + &= \big[ p_0 \big( f^* g' - (f^*)' g \big) \big]_a^b + \braket{\hat{L}^\dagger f}{g} + = \braket*{\hat{L}^\dagger f}{g} +\end{aligned}$$ + +Using the aforementioned restriction $p_1(x) = p_0'(x)$, +we then take a look at the definition of $\hat{L}^\dagger$: + +$$\begin{aligned} + \hat{L}^\dagger \{f\} + &= (p_0 f)'' - (p_1 f)' + (p_2 f) + \\ + &= p_0 f'' + (2 p_0' - p_1) f' + (p_0'' - p_1' + p_2) f + \\ + &= p_0 f'' + p_0' f' + p_2 f + \\ + &= (p_0 f')' + p_2 f +\end{aligned}$$ + +The original operator $\hat{L}$ reduces to the same form, +so it is **self-adjoint**: + +$$\begin{aligned} + \hat{L} \{f\} + &= p_0 f'' + p_0' f' + p_2 f + = (p_0 f')' + p_2 f + = \hat{L}^\dagger \{f\} +\end{aligned}$$ + +Consequently, every such second-order linear operator $\hat{L}$ is self-adjoint, +as long as it satisfies the constraints $p_1(x) = p_0'(x)$ and $[p_0 (f^* g' - (f^*)' g)]_a^b = 0$. 
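This self-adjointness is easy to verify numerically for a concrete case. The sketch below uses our own arbitrarily chosen example (the functions $p = 1 + x^2$, $q = x$, and the real test functions $f = \sin(\pi x)$, $g = \sin(2 \pi x)$, which vanish at the endpoints so the boundary term drops out): both inner products agree to within the quadrature error:

```python
import math

p  = lambda x: 1.0 + x * x    # p(x) > 0 on [0, 1]
dp = lambda x: 2.0 * x
q  = lambda x: x

def L(u, du, ddu):
    """Self-adjoint operator: (p u')' + q u = p u'' + p' u' + q u."""
    return lambda x: p(x) * ddu(x) + dp(x) * du(x) + q(x) * u(x)

pi = math.pi
f,  df,  ddf = (lambda x: math.sin(pi * x),
                lambda x: pi * math.cos(pi * x),
                lambda x: -pi * pi * math.sin(pi * x))
g,  dg,  ddg = (lambda x: math.sin(2 * pi * x),
                lambda x: 2 * pi * math.cos(2 * pi * x),
                lambda x: -4 * pi * pi * math.sin(2 * pi * x))

def simpson(h, a, b, n=2000):
    """Composite Simpson integration of h over [a, b] (n must be even)."""
    s = h(a) + h(b)
    w = (b - a) / n
    for i in range(1, n):
        s += (4 if i % 2 else 2) * h(a + i * w)
    return s * w / 3

Lg, Lf = L(g, dg, ddg), L(f, df, ddf)
lhs = simpson(lambda x: f(x) * Lg(x), 0.0, 1.0)   # <f, L g>
rhs = simpson(lambda x: Lf(x) * g(x), 0.0, 1.0)   # <L f, g>
assert abs(lhs - rhs) < 1e-8
```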
+ +Let us ignore the latter constraint for now (it will return later), +and focus on the former: what if $\hat{L}$ does not satisfy $p_1 = p_0'$? +We multiply it by an unknown $p(x) \neq 0$, and divide by $p_0(x) \neq 0$: + +$$\begin{aligned} + \frac{p(x)}{p_0(x)} \hat{L} \{u\} = p(x) u'' + p(x) \frac{p_1(x)}{p_0(x)} u' + p(x) \frac{p_2(x)}{p_0(x)} u +\end{aligned}$$ + +We now define $q(x)$, +and demand that the derivative $p'(x)$ of the unknown $p(x)$ satisfies: + +$$\begin{aligned} + q(x) = p(x) \frac{p_2(x)}{p_0(x)} + \qquad + p'(x) = p(x) \frac{p_1(x)}{p_0(x)} +\end{aligned}$$ + +The latter is a differential equation for $p(x)$, which we solve by integration: + +$$\begin{gathered} + \frac{p_1(x)}{p_0(x)} = \frac{1}{p(x)} \dv{p}{x} + \quad \implies \quad + \frac{p_1(x)}{p_0(x)} \dd{x} = \frac{1}{p(x)} \dd{p} + \\ + \implies \quad + \int_a^x \frac{p_1(\xi)}{p_0(\xi)} \dd{\xi} = \int_{p(a)}^{p(x)} \frac{1}{f} \dd{f} + = \ln\Big( \frac{p(x)}{p(a)} \Big) + \\ + \implies \quad + p(x) = p(a) \exp\!\Big( \int_a^x \frac{p_1(\xi)}{p_0(\xi)} \dd{\xi} \Big) +\end{gathered}$$ + +Now that we have $p(x)$ and $q(x)$, we can define a new operator $\hat{L}_p$ as follows: + +$$\begin{aligned} + \hat{L}_p \{u\} + = \frac{p}{p_0} \hat{L} \{u\} + = p u'' + p' u' + q u + = (p u')' + q u +\end{aligned}$$ + +This is the self-adjoint form from earlier! +So even if $p_0' \neq p_1$, any second-order linear operator with $p_0(x) \neq 0$ +can easily be put in self-adjoint form.
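As a concrete instance of this recipe (our own example, not from the article): Hermite's equation $u'' - 2 x u' + 2 n u = 0$ has $p_0 = 1$ and $p_1 = -2x \neq p_0'$, and the integral above gives $p(x) = e^{-x^2}$, taking $a = 0$ and $p(a) = 1$. A small Python sketch confirms the computed integrating factor against the closed form:

```python
import math

p0 = lambda x: 1.0
p1 = lambda x: -2.0 * x

def integrating_factor(x, a=0.0, n=4000):
    """p(x) = p(a) * exp( integral_a^x p1/p0 dxi ), via the trapezoid rule."""
    w = (x - a) / n
    ratio = lambda xi: p1(xi) / p0(xi)
    s = 0.5 * (ratio(a) + ratio(x)) + sum(ratio(a + i * w) for i in range(1, n))
    return math.exp(s * w)    # p(a) = 1 by choice

# for Hermite's equation the closed form is p(x) = exp(-x^2)
for x in (0.5, 1.0, 1.7):
    assert abs(integrating_factor(x) - math.exp(-x * x)) < 1e-6
```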
+ +This general form is known as the **Sturm-Liouville operator** $\hat{L}_{SL}$, +where $p(x)$ and $q(x)$ are non-zero real functions of the variable $x \in [a,b]$: + +$$\begin{aligned} + \boxed{ + \hat{L}_{SL} \{u(x)\} + = \frac{d}{dx}\Big( p(x) \frac{du}{dx} \Big) + q(x) u(x) + = \hat{L}_{SL}^\dagger \{u(x)\} + } +\end{aligned}$$ + + +## Eigenvalue problem + +A **Sturm-Liouville problem** (SLP) is analogous to a matrix eigenvalue problem, +where $w(x)$ is a real weight function, $\lambda$ is the **eigenvalue**, +and $u(x)$ is the corresponding **eigenfunction**: + +$$\begin{aligned} + \boxed{ + \hat{L}_{SL}\{u(x)\} = - \lambda w(x) u(x) + } +\end{aligned}$$ + +Necessarily, $w(x) > 0$ except in isolated points, where $w(x) = 0$ is allowed; +the point is that any inner product $\braket{f}{w g}$ may never be zero due to $w$'s fault. +Furthermore, the convention is that $u(x)$ cannot be trivially zero. + +In our derivation of $\hat{L}_{SL}$, +we removed a boundary term to get self-adjointness. +Consequently, to have a valid SLP, the boundary conditions for +$u(x)$ must be as follows, otherwise the operator cannot be self-adjoint: + +$$\begin{aligned} + \Big[ p(x) \big( u^*(x) u'(x) - (u'(x))^* u(x) \big) \Big]_a^b = 0 +\end{aligned}$$ + +There are many boundary conditions (BCs) which satisfy this requirement. +Some notable ones are listed here non-exhaustively: + ++ **Dirichlet BCs**: $u(a) = u(b) = 0$ ++ **Neumann BCs**: $u'(a) = u'(b) = 0$ ++ **Robin BCs**: $\alpha_1 u(a) + \beta_1 u'(a) = \alpha_2 u(b) + \beta_2 u'(b) = 0$ with $\alpha_{1,2}, \beta_{1,2} \in \mathbb{R}$ ++ **Periodic BCs**: $p(a) = p(b)$, $u(a) = u(b)$, and $u'(a) = u'(b)$ ++ **Legendre "BCs"**: $p(a) = p(b) = 0$ + +Once this requirement is satisfied, Sturm-Liouville theory gives us +some very useful information about $\lambda$ and $u(x)$. 
+From the definition of an SLP, we know that, given two arbitrary (and possibly identical) +eigenfunctions $u_n$ and $u_m$, the following must be satisfied: + +$$\begin{aligned} + 0 = \hat{L}_{SL}\{u_n\} + \lambda_n w u_n = \hat{L}_{SL}\{u_m^*\} + \lambda_m^* w u_m^* +\end{aligned}$$ + +We subtract these expressions, multiply by the eigenfunctions, and integrate: + +$$\begin{aligned} + 0 + &= \int_a^b u_m^* \big(\hat{L}_{SL}\{u_n\} + \lambda_n w u_n\big) - u_n \big(\hat{L}_{SL}\{u_m^*\} + \lambda_m^* w u_m^*\big) \:dx + \\ + &= \int_a^b u_m^* \hat{L}_{SL}\{u_n\} - u_n \hat{L}_{SL}\{u_m^*\} + u_n u_m^* w (\lambda_n - \lambda_m^*) \:dx +\end{aligned}$$ + +Rearranging this a bit reveals that these are in fact three inner products: + +$$\begin{aligned} + \int_a^b u_m^* \hat{L}_{SL}\{u_n\} - u_n \hat{L}_{SL}\{u_m^*\} \:dx + &= (\lambda_m^* - \lambda_n) \int_a^b u_n u_m^* w \:dx + \\ + \braket*{u_m}{\hat{L}_{SL} u_n} - \braket*{\hat{L}_{SL} u_m}{u_n} + &= (\lambda_m^* - \lambda_n) \braket{u_m}{w u_n} +\end{aligned}$$ + +The operator $\hat{L}_{SL}$ is self-adjoint by definition, +so the left-hand side vanishes, leaving us with: + +$$\begin{aligned} + 0 + &= (\lambda_m^* - \lambda_n) \braket{u_m}{w u_n} +\end{aligned}$$ + +When $m = n$, the inner product $\braket{u_n}{w u_n}$ is real and positive +(assuming $u_n$ is not trivially zero, in which case it would be disqualified anyway). +In this case we thus know that $\lambda_n^* = \lambda_n$, +i.e. the eigenvalue $\lambda_n$ is real for any $n$. + +When $m \neq n$, then $\lambda_m^* - \lambda_n$ may or may not be zero, +depending on the degeneracy. If there is no degeneracy, we +see that $\braket{u_m}{w u_n} = 0$, i.e. the eigenfunctions are orthogonal. + +In case of degeneracy, manual orthogonalization is needed, but as it turns out, +this is guaranteed to be doable, using e.g. the [Gram-Schmidt method](/know/concept/gram-schmidt-method/). 
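These orthogonality relations can be checked directly for the simplest SLP, $u'' = -\lambda u$ on $[0, 1]$ with Dirichlet BCs, where $p(x) = w(x) = 1$, $\lambda_n = n^2 \pi^2$, and $u_n(x) = \sin(n \pi x)$. The Python sketch below (our own illustration, using midpoint-rule quadrature) verifies $\braket{u_m}{w u_n} = A_n \delta_{nm}$ with $A_n = 1/2$:

```python
import math

def inner(m, n, N=20000):
    """Midpoint-rule approximation of <u_m, w u_n> on [0, 1] with w(x) = 1."""
    w = 1.0 / N
    return sum(math.sin(m * math.pi * (i + 0.5) * w)
               * math.sin(n * math.pi * (i + 0.5) * w) for i in range(N)) * w

assert abs(inner(1, 2)) < 1e-6          # distinct eigenfunctions: orthogonal
assert abs(inner(2, 5)) < 1e-6
assert abs(inner(3, 3) - 0.5) < 1e-6    # norm A_n = 1/2 for every n
```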
+ +In conclusion, **a Sturm-Liouville problem has real eigenvalues $\lambda$, +and all the corresponding eigenfunctions $u(x)$ are mutually orthogonal**: + +$$\begin{aligned} + \boxed{ + \braket{u_m(x)}{w(x) u_n(x)} + = \braket{u_n}{w u_n} \delta_{nm} + = A_n \delta_{nm} + } +\end{aligned}$$ + +When you're solving a differential eigenvalue problem, +knowing that all eigenvalues are real is a *huge* simplification, +so it is always worth checking whether you're dealing with an SLP. + +Another useful fact of SLPs is that they always +have an infinite number of discrete eigenvalues. +Furthermore, the eigenvalues always ascend to $+\infty$; +in other words, there always exists a *lowest* eigenvalue $\lambda_0 > -\infty$, +known as the **ground state**. + + +## Completeness + +Not only are the eigenfunctions $u_n(x)$ of an SLP orthogonal, they +also form a **complete basis**, meaning that any well-behaved function $f(x)$ can be +expanded as a **generalized Fourier series** with coefficients $a_n$: + +$$\begin{aligned} + \boxed{ + f(x) + = \sum_{n = 0}^\infty a_n u_n(x) + \quad \mathrm{for}\: x \in ]a, b[ + } +\end{aligned}$$ + +This series will converge significantly faster if $f(x)$ +satisfies the same BCs as $u_n(x)$. In that case the +expansion will even be valid for the inclusive interval $x \in [a, b]$. 
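As a concrete test of such an expansion (our own example): take again $u_n(x) = \sin(n \pi x)$ on $[0, 1]$ with $w = 1$ and $A_n = 1/2$, and expand $f(x) = x$. Each coefficient is found by projecting $f$ onto $u_n$ numerically; since $f(1) \neq u_n(1)$, the BCs do not match and convergence is slow, as noted above, but the interior values are still recovered:

```python
import math

def coeff(n, N=20000):
    """a_n = <u_n, w f> / A_n for f(x) = x, via midpoint-rule integration."""
    h = 1.0 / N
    ip = sum((i + 0.5) * h * math.sin(n * math.pi * (i + 0.5) * h)
             for i in range(N)) * h
    return ip / 0.5     # A_n = <u_n, w u_n> = 1/2 for sin(n pi x)

coeffs = [coeff(n) for n in range(1, 101)]

def series(x):
    """Truncated generalized Fourier series of f(x) = x."""
    return sum(a * math.sin((n + 1) * math.pi * x) for n, a in enumerate(coeffs))

# the exact coefficients are a_n = 2 (-1)^(n+1) / (n pi)
assert abs(coeffs[0] - 2 / math.pi) < 1e-6
# converges to f(x) = x inside the interval (slowly, since f(1) != 0)
assert abs(series(0.25) - 0.25) < 0.02
assert abs(series(0.5) - 0.5) < 0.02
```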
+ +To find an expression for the coefficients $a_n$, +we multiply the above generalized Fourier series by $w(x) u_m^*(x)$ for an arbitrary $m$: + +$$\begin{aligned} + f(x) w(x) u_m^*(x) + &= \sum_{n = 0}^\infty a_n u_n(x) w(x) u_m^*(x) +\end{aligned}$$ + +By integrating we get inner products on both the left and the right: + +$$\begin{aligned} + \int_a^b f(x) w(x) u_m^*(x) \dd{x} + &= \int_a^b \Big(\sum_{n = 0}^\infty a_n u_n(x) w(x) u_m^*(x)\Big) \dd{x} + \\ + \braket{u_m}{w f} + &= \sum_{n = 0}^\infty a_n \braket{u_m}{w u_n} +\end{aligned}$$ + +Because the eigenfunctions of an SLP are mutually orthogonal, +the summation disappears: + +$$\begin{aligned} + \braket{u_m}{w f} + &= \sum_{n = 0}^\infty a_n \braket{u_m}{w u_n} + = \sum_{n = 0}^\infty a_n A_n \delta_{nm} + = a_m A_m +\end{aligned}$$ + +After isolating this for $a_n$, we see that +the coefficients are given by the projection of the target +function $f(x)$ onto the normalized eigenfunctions $u_n(x) / A_n$: + +$$\begin{aligned} + \boxed{ + a_n + = \frac{\braket{u_n}{w f}}{A_n} + = \frac{\braket{u_n}{w f}}{\braket{u_n}{w u_n}} + } +\end{aligned}$$ + +As a final remark, we can see something interesting +by rearranging the generalized Fourier series +after inserting the expression for $a_n$: + +$$\begin{aligned} + f(x) + &= \sum_{n = 0}^\infty \frac{1}{A_n} \braket{u_n}{w f} u_n(x) + = \int_a^b \Big(\sum_{n = 0}^\infty \frac{1}{A_n} u_n^*(\xi) w(\xi) f(\xi) u_n(x) \Big) \dd{\xi} + \\ + &= \int_a^b f(\xi) \Big(\sum_{n = 0}^\infty \frac{1}{A_n} u_n^*(\xi) w(\xi) u_n(x) \Big) \dd{\xi} + %= \int_a^b f(\xi) \delta(x - \xi) \dd{\xi} +\end{aligned}$$ + +Upon closer inspection, the parenthesized summation +must be the [Dirac delta function](/know/concept/dirac-delta-function/) $\delta(x)$ +for the integral to work out. 
+This is in fact the underlying requirement for completeness: + +$$\begin{aligned} + \boxed{ + \sum_{n = 0}^\infty \frac{1}{A_n} u_n^*(\xi) w(\xi) u_n(x) = \delta(x - \xi) + } +\end{aligned}$$ + diff --git a/content/know/concept/time-independent-perturbation-theory/index.pdc b/content/know/concept/time-independent-perturbation-theory/index.pdc new file mode 100644 index 0000000..4f30ae8 --- /dev/null +++ b/content/know/concept/time-independent-perturbation-theory/index.pdc @@ -0,0 +1,329 @@ +--- +title: "Time-independent perturbation theory" +firstLetter: "T" +publishDate: 2021-02-22 +categories: +- Quantum mechanics +- Physics + +date: 2021-02-22T21:38:18+01:00 +draft: false +markup: pandoc +--- + +# Time-independent perturbation theory + +**Time-independent perturbation theory**, sometimes also called +**stationary state perturbation theory**, is a specific application of +perturbation theory to the time-independent Schrödinger +equation in quantum physics, for +Hamiltonians of the following form: + +$$\begin{aligned} + \hat{H} = \hat{H}_0 + \lambda \hat{H}_1 +\end{aligned}$$ + +Where $\hat{H}_0$ is a Hamiltonian for which the time-independent +Schrödinger equation has a known solution, and $\hat{H}_1$ is a small +perturbing Hamiltonian. The eigenenergies $E_n$ and eigenstates +$\ket{\psi_n}$ of the composite problem are expanded in the +perturbation "bookkeeping" parameter $\lambda$: + +$$\begin{aligned} + \ket{\psi_n} + &= \ket*{\psi_n^{(0)}} + \lambda \ket*{\psi_n^{(1)}} + \lambda^2 \ket*{\psi_n^{(2)}} + ... + \\ + E_n + &= E_n^{(0)} + \lambda E_n^{(1)} + \lambda^2 E_n^{(2)} + ... +\end{aligned}$$ + +Where $E_n^{(1)}$ and $\ket*{\psi_n^{(1)}}$ are called the **first-order +corrections**, and so on for higher orders. 
We insert this into the +Schrödinger equation: + +$$\begin{aligned} + \hat{H} \ket{\psi_n} + &= \hat{H}_0 \ket*{\psi_n^{(0)}} + + \lambda \big( \hat{H}_1 \ket*{\psi_n^{(0)}} + \hat{H}_0 \ket*{\psi_n^{(1)}} \big) \\ + &\qquad + \lambda^2 \big( \hat{H}_1 \ket*{\psi_n^{(1)}} + \hat{H}_0 \ket*{\psi_n^{(2)}} \big) + ... + \\ + E_n \ket{\psi_n} + &= E_n^{(0)} \ket*{\psi_n^{(0)}} + + \lambda \big( E_n^{(1)} \ket*{\psi_n^{(0)}} + E_n^{(0)} \ket*{\psi_n^{(1)}} \big) \\ + &\qquad + \lambda^2 \big( E_n^{(2)} \ket*{\psi_n^{(0)}} + E_n^{(1)} \ket*{\psi_n^{(1)}} + E_n^{(0)} \ket*{\psi_n^{(2)}} \big) + ... +\end{aligned}$$ + +If we collect the terms according to the order of $\lambda$, we arrive +at the following endless series of equations, of which in practice only +the first three are typically used: + +$$\begin{aligned} + \hat{H}_0 \ket*{\psi_n^{(0)}} + &= E_n^{(0)} \ket*{\psi_n^{(0)}} + \\ + \hat{H}_1 \ket*{\psi_n^{(0)}} + \hat{H}_0 \ket*{\psi_n^{(1)}} + &= E_n^{(1)} \ket*{\psi_n^{(0)}} + E_n^{(0)} \ket*{\psi_n^{(1)}} + \\ + \hat{H}_1 \ket*{\psi_n^{(1)}} + \hat{H}_0 \ket*{\psi_n^{(2)}} + &= E_n^{(2)} \ket*{\psi_n^{(0)}} + E_n^{(1)} \ket*{\psi_n^{(1)}} + E_n^{(0)} \ket*{\psi_n^{(2)}} + \\ + ... + &= ... +\end{aligned}$$ + +The first equation is the unperturbed problem, which we assume has +already been solved, with eigenvalues $E_n^{(0)} = \varepsilon_n$ and +eigenvectors $\ket*{\psi_n^{(0)}} = \k