author     Prefetch  2021-03-03 18:03:22 +0100
committer  Prefetch  2021-03-03 18:03:22 +0100
commit     bcf2e9b649425d2df16b64752c4396a07face7ea (patch)
tree       3235310187b56d5b2e2cb4ae4a4b18e4e7e28e02
parent     20e7c96c35b922252e17fd5fc9ff0407d9bd30ca (diff)
Expand knowledge base
-rw-r--r--  content/know/category/numerical-methods.md            9
-rw-r--r--  content/know/concept/density-operator/index.pdc      131
-rw-r--r--  content/know/concept/lagrange-multiplier/index.pdc   103
-rw-r--r--  content/know/concept/pulay-mixing/index.pdc          158
-rw-r--r--  content/know/concept/second-quantization/index.pdc   332
5 files changed, 733 insertions, 0 deletions
diff --git a/content/know/category/numerical-methods.md b/content/know/category/numerical-methods.md
new file mode 100644
index 0000000..5455fee
--- /dev/null
+++ b/content/know/category/numerical-methods.md
@@ -0,0 +1,9 @@
+---
+title: "Numerical methods"
+firstLetter: "N"
+date: 2021-02-26T16:00:51+01:00
+draft: false
+layout: "category"
+---
+
+This page will fill itself.
diff --git a/content/know/concept/density-operator/index.pdc b/content/know/concept/density-operator/index.pdc
new file mode 100644
index 0000000..84c2d74
--- /dev/null
+++ b/content/know/concept/density-operator/index.pdc
@@ -0,0 +1,131 @@
+---
+title: "Density operator"
+firstLetter: "D"
+publishDate: 2021-03-03
+categories:
+- Physics
+- Quantum mechanics
+
+date: 2021-03-03T09:07:51+01:00
+draft: false
+markup: pandoc
+---
+
+# Density operator
+
+In quantum mechanics, the expectation value $\expval*{\hat{L}}$
+of an observable represents the average result of measuring
+$\hat{L}$ on a large number of systems (an **ensemble**)
+prepared in the same state $\ket{\Psi}$,
+known as a **pure ensemble** or (somewhat confusingly) **pure state**.
+
+But what if the systems of the ensemble are not all in the same state?
+To work with such a **mixed ensemble** or **mixed state**,
+the **density operator** $\hat{\rho}$ or **density matrix** (in a basis) is useful.
+It is defined as follows, where $p_n$ is the probability
+that the system is in state $\ket{\Psi_n}$,
+i.e. the proportion of systems in the ensemble that are
+in state $\ket{\Psi_n}$:
+
+$$\begin{aligned}
+ \boxed{
+ \hat{\rho}
+ = \sum_{n} p_n \ket{\Psi_n} \bra{\Psi_n}
+ }
+\end{aligned}$$
+
+Do not let this form fool you into thinking that $\hat{\rho}$ is diagonal:
+$\ket{\Psi_n}$ need not be basis vectors.
+Instead, the matrix elements of $\hat{\rho}$ are found as usual,
+where $\ket{j}$ and $\ket{k}$ are basis vectors:
+
+$$\begin{aligned}
+ \matrixel{j}{\hat{\rho}}{k}
+ = \sum_{n} p_n \braket{j}{\Psi_n} \braket{\Psi_n}{k}
+\end{aligned}$$
+
+However, from the special case where $\ket{\Psi_n}$ are indeed basis vectors,
+we can conclude that $\hat{\rho}$ is Hermitian,
+and that its trace (i.e. the total probability) is 100%:
+
+$$\begin{gathered}
+ \boxed{
+ \hat{\rho}^\dagger = \hat{\rho}
+ }
+ \qquad \qquad
+ \boxed{
+ \mathrm{Tr}(\hat{\rho}) = 1
+ }
+\end{gathered}$$
+
+These properties are preserved by all changes of basis.
+If the ensemble is purely $\ket{\Psi}$,
+then $\hat{\rho}$ is given by a single state vector:
+
+$$\begin{aligned}
+ \hat{\rho} = \ket{\Psi} \bra{\Psi}
+\end{aligned}$$
+
+From the special case where $\ket{\Psi}$ is a basis vector,
+we can conclude that for a pure ensemble,
+$\hat{\rho}$ is idempotent, which means that:
+
+$$\begin{aligned}
+ \hat{\rho}^2 = \hat{\rho}
+\end{aligned}$$
+
+This can be used to find out whether a given $\hat{\rho}$
+represents a pure or mixed ensemble.
+
+Next, we define the ensemble average $\expval*{\expval*{\hat{L}}}$
+as the mean of the expectation values for states in the ensemble,
+which can be calculated like so:
+
+$$\begin{aligned}
+ \boxed{
+ \expval*{\expval*{\hat{L}}}
+ = \sum_{n} p_n \matrixel{\Psi_n}{\hat{L}}{\Psi_n}
+ = \mathrm{Tr}(\hat{L} \hat{\rho})
+ }
+\end{aligned}$$
+
+To prove the latter,
+we write out the trace $\mathrm{Tr}$ as the sum of the diagonal elements, so:
+
+$$\begin{aligned}
+ \mathrm{Tr}(\hat{L} \hat{\rho})
+ &= \sum_{j} \matrixel{j}{\hat{L} \hat{\rho}}{j}
+ = \sum_{j} \sum_{n} p_n \matrixel{j}{\hat{L}}{\Psi_n} \braket{\Psi_n}{j}
+ \\
+ &= \sum_{n} \sum_{j} p_n \braket{\Psi_n}{j} \matrixel{j}{\hat{L}}{\Psi_n}
+ = \sum_{n} p_n \matrixel{\Psi_n}{\hat{I} \hat{L}}{\Psi_n}
+ = \expval*{\expval*{\hat{L}}}
+\end{aligned}$$
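+
+These properties are easy to verify numerically.
+Below is a minimal sketch (assuming Python with NumPy;
+the example states and probabilities are arbitrary)
+that builds $\hat{\rho}$ for a mixed qubit ensemble and checks
+its hermiticity, trace, purity, and the ensemble average formula:
+
+```python
+import numpy as np
+
+# Example ensemble: 75% in |0>, 25% in |+> (arbitrary illustration values)
+ket0 = np.array([1, 0], dtype=complex)
+ketp = np.array([1, 1], dtype=complex) / np.sqrt(2)
+probs, kets = [0.75, 0.25], [ket0, ketp]
+
+# rho = sum_n p_n |Psi_n><Psi_n|
+rho = sum(p * np.outer(k, k.conj()) for p, k in zip(probs, kets))
+
+assert np.allclose(rho, rho.conj().T)      # Hermitian
+assert np.isclose(np.trace(rho), 1)        # unit trace
+print(np.isclose(np.trace(rho @ rho), 1))  # False: Tr(rho^2) < 1, so mixed
+
+# The ensemble average of an observable L equals Tr(L rho)
+L = np.array([[1, 0], [0, -1]], dtype=complex)  # e.g. Pauli-z
+direct = sum(p * (k.conj() @ L @ k) for p, k in zip(probs, kets))
+print(np.isclose(np.trace(L @ rho), direct))    # True
+```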
+
+In both the pure and mixed cases,
+if the state probabilities $p_n$ are constant with respect to time,
+then the evolution of the ensemble obeys the **Von Neumann equation**:
+
+$$\begin{aligned}
+ \boxed{
+ i \hbar \dv{\hat{\rho}}{t} = [\hat{H}, \hat{\rho}]
+ }
+\end{aligned}$$
+
+This is equivalent to the Schrödinger equation:
+one can be derived from the other.
+We differentiate $\hat{\rho}$ with the product rule,
+and then substitute the Schrödinger equation and its Hermitian conjugate:
+
+$$\begin{aligned}
+ i \hbar \dv{\hat{\rho}}{t}
+ &= i \hbar \dv{t} \sum_n p_n \ket{\Psi_n} \bra{\Psi_n}
+ \\
+ &= \sum_n p_n \Big( i \hbar \dv{t} \ket{\Psi_n} \Big) \bra{\Psi_n} + \sum_n p_n \ket{\Psi_n} \Big( i \hbar \dv{t} \bra{\Psi_n} \Big)
+ \\
+    &= \sum_n p_n \hat{H} \ket{\Psi_n} \bra{\Psi_n} - \sum_n p_n \ket{\Psi_n} \bra{\Psi_n} \hat{H}
+ = \hat{H} \hat{\rho} - \hat{\rho} \hat{H}
+ = [\hat{H}, \hat{\rho}]
+\end{aligned}$$
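+
+For a time-independent $\hat{H}$, the Von Neumann equation is solved by
+$\hat{\rho}(t) = e^{-i \hat{H} t / \hbar} \: \hat{\rho}(0) \: e^{i \hat{H} t / \hbar}$,
+which can be checked numerically.
+A small sketch (assuming SciPy, with $\hbar = 1$
+and an arbitrary example Hamiltonian):
+
+```python
+import numpy as np
+from scipy.linalg import expm
+
+hbar = 1.0
+H = np.array([[1.0, 0.3], [0.3, -1.0]], dtype=complex)  # arbitrary Hermitian example
+rho0 = np.diag([0.75, 0.25]).astype(complex)            # initial mixed state
+
+def rho_t(t):
+    """Exact evolution rho(t) = U rho(0) U^dagger with U = exp(-i H t / hbar)."""
+    U = expm(-1j * H * t / hbar)
+    return U @ rho0 @ U.conj().T
+
+# Compare i*hbar*d(rho)/dt (central finite difference) with [H, rho] at t = 0.7
+t, dt = 0.7, 1e-6
+lhs = 1j * hbar * (rho_t(t + dt) - rho_t(t - dt)) / (2 * dt)
+rhs = H @ rho_t(t) - rho_t(t) @ H
+print(np.allclose(lhs, rhs, atol=1e-6))  # True
+```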
+
+
diff --git a/content/know/concept/lagrange-multiplier/index.pdc b/content/know/concept/lagrange-multiplier/index.pdc
new file mode 100644
index 0000000..2b14897
--- /dev/null
+++ b/content/know/concept/lagrange-multiplier/index.pdc
@@ -0,0 +1,103 @@
+---
+title: "Lagrange multiplier"
+firstLetter: "L"
+publishDate: 2021-03-02
+categories:
+- Mathematics
+- Physics
+
+date: 2021-03-02T16:28:42+01:00
+draft: false
+markup: pandoc
+---
+
+# Lagrange multiplier
+
+The method of **Lagrange multipliers** or **undetermined multipliers**
+is a technique for optimizing (i.e. finding the extrema of)
+a function $f(x, y, z)$,
+subject to a given constraint $\phi(x, y, z) = C$,
+where $C$ is a constant.
+
+If we ignore the constraint $\phi$,
+optimizing $f$ simply comes down to finding stationary points:
+
+$$\begin{aligned}
+ 0 &= \dd{f} = f_x \dd{x} + f_y \dd{y} + f_z \dd{z}
+\end{aligned}$$
+
+This problem is easy:
+$\dd{x}$, $\dd{y}$, and $\dd{z}$ are independent and arbitrary,
+so all we need to do is find a point $(x_0, y_0, z_0)$
+where the partial derivatives $f_x$, $f_y$ and $f_z$ all vanish;
+that point is then an extremum.
+
+But the constraint $\phi$, over which we have no control,
+adds a relation between $\dd{x}$, $\dd{y}$, and $\dd{z}$,
+so if two are known, the third is given by $\phi = C$.
+The problem is then a system of equations:
+
+$$\begin{aligned}
+ 0 &= \dd{f} = f_x \dd{x} + f_y \dd{y} + f_z \dd{z}
+ \\
+ 0 &= \dd{\phi} = \phi_x \dd{x} + \phi_y \dd{y} + \phi_z \dd{z}
+\end{aligned}$$
+
+Solving this directly would be a delicate balancing act
+of all the partial derivatives.
+
+To help us solve this, we introduce a "dummy" parameter $\lambda$,
+the so-called **Lagrange multiplier**, and construct a new function $L$ given by:
+
+$$\begin{aligned}
+ L(x, y, z) = f(x, y, z) + \lambda \phi(x, y, z)
+\end{aligned}$$
+
+Clearly, $\dd{L} = \dd{f} + \lambda \dd{\phi} = 0$,
+so now the problem is a single equation again:
+
+$$\begin{aligned}
+ 0 = \dd{L}
+ = (f_x + \lambda \phi_x) \dd{x} + (f_y + \lambda \phi_y) \dd{y} + (f_z + \lambda \phi_z) \dd{z}
+\end{aligned}$$
+
+Assuming $\phi_z \neq 0$, we now choose $\lambda$ such that $f_z + \lambda \phi_z = 0$.
+This choice represents satisfying the constraint,
+so now the remaining $\dd{x}$ and $\dd{y}$ are independent again,
+and we simply have to find the roots of $f_x + \lambda \phi_x$ and $f_y + \lambda \phi_y$.
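+
+This procedure is easy to automate.
+As a concrete illustration, here is a small sketch in Python with SymPy;
+the function $f = xyz$ and constraint $x + y + z = 3$ are arbitrary examples:
+
+```python
+import sympy as sp
+
+x, y, z, lam = sp.symbols("x y z lam", real=True)
+f = x * y * z      # example function to optimize
+phi = x + y + z    # example constraint function, phi = C
+C = 3
+
+# L = f + lambda*phi; its stationary points, plus the constraint itself
+L = f + lam * phi
+eqs = [sp.diff(L, v) for v in (x, y, z)] + [sp.Eq(phi, C)]
+print(sp.solve(eqs, (x, y, z, lam), dict=True))
+# The solutions include x = y = z = 1 (with lambda = -1)
+```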
+
+This generalizes nicely to multiple constraints or more variables:
+suppose that we want to find the extrema of $f(x_1, ..., x_N)$
+subject to $M < N$ conditions:
+
+$$\begin{aligned}
+ \phi_1(x_1, ..., x_N) = C_1 \qquad \cdots \qquad \phi_M(x_1, ..., x_N) = C_M
+\end{aligned}$$
+
+This once again turns into a delicate system of $M+1$ equations to solve:
+
+$$\begin{aligned}
+ 0 &= \dd{f} = f_{x_1} \dd{x_1} + ... + f_{x_N} \dd{x_N}
+ \\
+ 0 &= \dd{\phi_1} = \phi_{1, x_1} \dd{x_1} + ... + \phi_{1, x_N} \dd{x_N}
+ \\
+ &\vdots
+ \\
+ 0 &= \dd{\phi_M} = \phi_{M, x_1} \dd{x_1} + ... + \phi_{M, x_N} \dd{x_N}
+\end{aligned}$$
+
+Then we introduce $M$ Lagrange multipliers $\lambda_1, ..., \lambda_M$
+and define $L(x_1, ..., x_N)$:
+
+$$\begin{aligned}
+ L = f + \sum_{m = 1}^M \lambda_m \phi_m
+\end{aligned}$$
+
+As before, we set $\dd{L} = 0$ and choose the multipliers $\lambda_1, ..., \lambda_M$
+to eliminate $M$ of its $N$ terms:
+
+$$\begin{aligned}
+ 0 = \dd{L}
+    = \sum_{n = 1}^N \Big( f_{x_n} + \sum_{m = 1}^M \lambda_m \phi_{m, x_n} \Big) \dd{x_n}
+\end{aligned}$$
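+
+The same mechanical procedure works for multiple constraints;
+a short SymPy sketch with $N = 3$, $M = 2$ and arbitrary example functions:
+
+```python
+import sympy as sp
+
+x1, x2, x3, l1, l2 = sp.symbols("x1 x2 x3 l1 l2", real=True)
+f = x1**2 + x2**2 + x3**2    # example function
+phi1, C1 = x1 + x2 + x3, 1   # example constraint 1
+phi2, C2 = x1 - x2, 0        # example constraint 2
+
+# L = f + l1*phi1 + l2*phi2; stationary points plus both constraints
+L = f + l1 * phi1 + l2 * phi2
+eqs = [sp.diff(L, v) for v in (x1, x2, x3)] + [sp.Eq(phi1, C1), sp.Eq(phi2, C2)]
+print(sp.solve(eqs, (x1, x2, x3, l1, l2)))  # x1 = x2 = x3 = 1/3
+```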
diff --git a/content/know/concept/pulay-mixing/index.pdc b/content/know/concept/pulay-mixing/index.pdc
new file mode 100644
index 0000000..9102c0e
--- /dev/null
+++ b/content/know/concept/pulay-mixing/index.pdc
@@ -0,0 +1,158 @@
+---
+title: "Pulay mixing"
+firstLetter: "P"
+publishDate: 2021-03-02
+categories:
+- Numerical methods
+
+date: 2021-03-02T19:11:51+01:00
+draft: false
+markup: pandoc
+---
+
+# Pulay mixing
+Some numerical problems are most easily solved *iteratively*,
+by generating a series $\rho_1$, $\rho_2$, etc.
+converging towards the desired solution $\rho_*$.
+**Pulay mixing**, also often called
+**direct inversion in the iterative subspace** (DIIS),
+is an effective method to speed up convergence,
+which also helps to avoid periodic divergences.
+
+The key concept it relies on is the **residual vector** $R_n$
+of the $n$th iteration, which in some way measures the error of the current $\rho_n$.
+Its exact definition varies,
+but is generally along the lines of the difference between
+the input of the iteration and the raw resulting output:
+
+$$\begin{aligned}
+ R_n
+ = R[\rho_n]
+ = \rho_n^\mathrm{new}[\rho_n] - \rho_n
+\end{aligned}$$
+
+It is not always clear what to do with $\rho_n^\mathrm{new}$.
+Directly using it as the next input ($\rho_{n+1} = \rho_n^\mathrm{new}$)
+often leads to oscillation,
+and linear mixing ($\rho_{n+1} = (1\!-\!f) \rho_n + f \rho_n^\mathrm{new}$)
+can take a very long time to converge properly.
+Pulay mixing offers an improvement.
+
+The idea is to construct the next iteration's input $\rho_{n+1}$
+as a linear combination of the previous inputs $\rho_1$, $\rho_2$, ..., $\rho_n$,
+such that it is as close as possible to the optimal $\rho_*$:
+
+$$\begin{aligned}
+ \boxed{
+ \rho_{n+1}
+ = \sum_{m = 1}^n \alpha_m \rho_m
+ }
+\end{aligned}$$
+
+To do so, we make two assumptions.
+Firstly, the current $\rho_n$ is already close to $\rho_*$,
+so that such a linear combination makes sense.
+Secondly, the iteration is linear,
+such that the raw output $\rho_{n+1}^\mathrm{new}$
+is also a linear combination with the *same coefficients*:
+
+$$\begin{aligned}
+ \rho_{n+1}^\mathrm{new}
+ = \sum_{m = 1}^n \alpha_m \rho_m^\mathrm{new}
+\end{aligned}$$
+
+We will return to these assumptions later.
+The point is that $R_{n+1}$ is also a linear combination:
+
+$$\begin{aligned}
+ R_{n+1}
+ = \rho_{n+1}^\mathrm{new} - \rho_{n+1}
+ = \sum_{m = 1}^n \alpha_m \rho_m^\mathrm{new} - \sum_{m = 1}^n \alpha_m \rho_m
+ = \sum_{m = 1}^n \alpha_m R_m
+\end{aligned}$$
+
+The goal is to choose the coefficients $\alpha_m$ such that
+the norm of the error $|R_{n+1}| \approx 0$,
+subject to the following constraint to preserve the normalization of $\rho_{n+1}$:
+
+$$\begin{aligned}
+ \sum_{m=1}^n \alpha_m = 1
+\end{aligned}$$
+
+We thus want to minimize the following quantity,
+where $\lambda$ is a [Lagrange multiplier](/know/concept/lagrange-multiplier/):
+
+$$\begin{aligned}
+ \braket{R_{n+1}}{R_{n+1}} + \lambda \sum_{m = 1}^n \alpha_m
+ = \sum_{m=1}^n \alpha_m \Big( \sum_{k=1}^n \alpha_k \braket{R_m}{R_k} + \lambda \Big)
+\end{aligned}$$
+
+By differentiating the right-hand side with respect to $\alpha_m$,
+we get a system of equations that we can write in matrix form,
+which is relatively cheap to solve numerically:
+
+$$\begin{aligned}
+ \begin{bmatrix}
+ \braket{R_1}{R_1} & \cdots & \braket{R_1}{R_n} & 1 \\
+ \vdots & \ddots & \vdots & \vdots \\
+ \braket{R_n}{R_1} & \cdots & \braket{R_n}{R_n} & 1 \\
+ 1 & \cdots & 1 & 0
+ \end{bmatrix}
+ \begin{bmatrix}
+ \alpha_1 \\ \vdots \\ \alpha_n \\ \lambda
+ \end{bmatrix}
+ =
+ \begin{bmatrix}
+ 0 \\ \vdots \\ 0 \\ 1
+ \end{bmatrix}
+\end{aligned}$$
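+
+Setting up and solving this system takes only a few lines of code.
+A minimal sketch in Python with NumPy
+(the function name and the rescaling trick are my own additions):
+
+```python
+import numpy as np
+
+def pulay_coefficients(residuals):
+    """Solve the bordered linear system for the coefficients alpha_m,
+    given a list of residual vectors [R_1, ..., R_n]."""
+    n = len(residuals)
+    A = np.zeros((n + 1, n + 1))
+    A[:n, :n] = [[np.vdot(Rm, Rk).real for Rk in residuals] for Rm in residuals]
+    A[:n, :n] /= np.abs(A[:n, :n]).max()  # rescale for conditioning; alpha unchanged
+    A[:n, n] = A[n, :n] = 1.0             # border of ones from the constraint
+    b = np.zeros(n + 1)
+    b[n] = 1.0                            # right-hand side (0, ..., 0, 1)
+    return np.linalg.solve(A, b)[:n]      # drop the multiplier lambda
+```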
+
+This method is very effective.
+However, in practice, the earlier inputs $\rho_1$, $\rho_2$, etc.
+are much further from $\rho_*$ than $\rho_n$,
+so usually only the most recent $N$ inputs $\rho_{n - N}$, ..., $\rho_n$ are used:
+
+$$\begin{aligned}
+ \rho_{n+1}
+    = \sum_{m = n-N}^n \alpha_m \rho_m
+\end{aligned}$$
+
+You might be confused by the absence of all $\rho_m^\mathrm{new}$
+in the creation of $\rho_{n+1}$, as if the iteration's outputs are being ignored.
+This is due to the first assumption,
+which states that $\rho_n^\mathrm{new}$ and $\rho_n$ are already similar,
+such that they are interchangeable.
+
+Speaking of which, about those assumptions:
+while they will clearly become more accurate as $\rho_n$ approaches $\rho_*$,
+they might be very dubious in the beginning.
+A consequence of this is that the early iterations might get "trapped"
+in a suboptimal subspace spanned by $\rho_1$, $\rho_2$, etc.
+To put it another way, we would be varying $n$ coefficients $\alpha_m$
+to try to optimize a $D$-dimensional $\rho_{n+1}$,
+where in general $D \gg n$, at least in the beginning.
+
+There is an easy fix to this problem:
+add a small amount of the raw residual $R_m$
+to "nudge" $\rho_{n+1}$ towards the right subspace,
+where $\beta \in [0,1]$ is a tunable parameter:
+
+$$\begin{aligned}
+ \boxed{
+ \rho_{n+1}
+    = \sum_{m = n-N}^n \alpha_m (\rho_m + \beta R_m)
+ }
+\end{aligned}$$
+
+In other words, we end up introducing a small amount of the raw outputs $\rho_m^\mathrm{new}$,
+while still giving more weight to iterations with smaller residuals.
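+
+A toy implementation of the complete scheme,
+reusing `pulay_coefficients` from above, might then look as follows,
+where `step` stands for whatever operation produces $\rho^\mathrm{new}[\rho]$
+(all names and parameter values are illustrative):
+
+```python
+def pulay_mix(step, rho, N=5, beta=0.1, tol=1e-8, max_iter=100):
+    """Iterate rho -> step(rho) to a fixed point, with Pulay mixing."""
+    history, residuals = [], []
+    for _ in range(max_iter):
+        R = step(rho) - rho                # residual of this iteration
+        if np.linalg.norm(R) < tol:
+            break
+        history.append(rho)
+        residuals.append(R)
+        history, residuals = history[-N:], residuals[-N:]  # keep the last N
+        alpha = pulay_coefficients(residuals)
+        # Next input: sum_m alpha_m (rho_m + beta R_m)
+        rho = sum(a * (r + beta * Rm)
+                  for a, r, Rm in zip(alpha, history, residuals))
+    return rho
+
+# Example: accelerate the linear iteration x -> x + (b - A x) towards A x = b
+rng = np.random.default_rng(0)
+M = rng.normal(size=(50, 50))
+A = np.eye(50) + 0.02 * (M + M.T)   # all eigenvalues close to 1
+b = rng.normal(size=50)
+x = pulay_mix(lambda x: x + (b - A @ x), np.zeros(50))
+print(np.linalg.norm(A @ x - b))    # small, i.e. converged
+```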
+
+In practice, Pulay mixing can accelerate convergence
+by up to one order of magnitude!
+
+
+
+## References
+1. P. Pulay,
+ [Convergence acceleration of iterative sequences. The case of SCF iteration](https://doi.org/10.1016/0009-2614(80)80396-4),
+ 1980, Elsevier.
diff --git a/content/know/concept/second-quantization/index.pdc b/content/know/concept/second-quantization/index.pdc
new file mode 100644
index 0000000..b8d9a18
--- /dev/null
+++ b/content/know/concept/second-quantization/index.pdc
@@ -0,0 +1,332 @@
+---
+title: "Second quantization"
+firstLetter: "S"
+publishDate: 2021-02-26
+categories:
+- Quantum mechanics
+- Physics
+
+date: 2021-02-26T10:04:16+01:00
+draft: false
+markup: pandoc
+---
+
+# Second quantization
+
+The **second quantization** is a technique to deal with quantum systems
+containing a large and/or variable number of identical particles.
+Its exact formulation depends on
+whether it is fermions or bosons that are being considered
+(see [Pauli exclusion principle](/know/concept/pauli-exclusion-principle/)).
+
+Regardless of whether the system is fermionic or bosonic,
+the idea is to change basis to a set of certain many-particle wave functions,
+known as the **Fock states**, which are specific members of a **Fock space**,
+a special kind of [Hilbert space](/know/concept/hilbert-space/),
+with a well-defined number of particles.
+
+For a set of $N$ single-particle energy eigenstates
+$\psi_n(x)$ and $N$ identical particles $x_n$, the Fock states are
+all the wave functions which contain $n$ particles, for $n$ going from $0$ to $N$.
+
+So for $n = 0$, there is one basis vector with $0$ particles,
+for $n = 1$, there are $N$ basis vectors with $1$ particle each,
+for $n = 2$, there is one basis vector for each way to distribute
+$2$ particles among the $N$ states,
+etc.
+
+In this basis, we define the **particle creation operators**
+and **particle annihilation operators**,
+which respectively add/remove a particle to/from a given state.
+In other words, these operators relate the Fock basis vectors
+to one another, and are very useful.
+
+The point is to express the system's state in such a way that the
+fermionic/bosonic constraints are automatically satisfied, and the
+formulae look the same regardless of the number of particles.
+
+
+## Fermions
+
+Fermions need to obey the Pauli exclusion principle, so each state can only
+contain one particle. In this case, the Fock states are given by:
+
+$$\begin{aligned}
+ \boxed{
+ \begin{aligned}
+ n &= 0:
+ \qquad \ket{0, 0, 0, ...}
+ \\
+ n &= 1:
+ \qquad \ket{1, 0, 0, ...} \quad \ket{0, 1, 0, ...} \quad \ket{0, 0, 1, ...} \quad \cdots
+ \\
+ n &= 2:
+ \qquad \ket{1, 1, 0, ...} \quad \ket{1, 0, 1, ...} \quad \ket{0, 1, 1, ...} \quad \cdots
+ \end{aligned}
+ }
+\end{aligned}$$
+
+The notation $\ket{N_\alpha, N_\beta, ...}$ is shorthand for
+the appropriate [Slater determinants](/know/concept/slater-determinant/).
+As an example, take $\ket{0, 1, 0, 1, 1}$,
+which contains three particles $a$, $b$ and $c$
+in states 2, 4 and 5:
+
+$$\begin{aligned}
+ \ket{0, 1, 0, 1, 1}
+ = \Psi(x_a, x_b, x_c)
+ = \frac{1}{\sqrt{3!}} \det\!
+ \begin{bmatrix}
+ \psi_2(x_a) & \psi_4(x_a) & \psi_5(x_a) \\
+ \psi_2(x_b) & \psi_4(x_b) & \psi_5(x_b) \\
+ \psi_2(x_c) & \psi_4(x_c) & \psi_5(x_c)
+ \end{bmatrix}
+\end{aligned}$$
+
+The creation operator $\hat{c}_\alpha^\dagger$ and annihilation
+operator $\hat{c}_\alpha$ are defined to live up to their name:
+they create or destroy a particle in the state $\psi_\alpha$:
+
+$$\begin{aligned}
+ \boxed{
+ \begin{aligned}
+ \hat{c}_\alpha^\dagger \ket{... (N_\alpha\!=\!0) ...}
+ &= J_\alpha \ket{... (N_\alpha\!=\!1) ...}
+ \\
+ \hat{c}_\alpha \ket{... (N_\alpha\!=\!1) ...}
+ &= J_\alpha \ket{... (N_\alpha\!=\!0) ...}
+ \end{aligned}
+ }
+\end{aligned}$$
+
+The factor $J_\alpha$ is sometimes known as the **Jordan-Wigner string**,
+and is necessary here to enforce the fermionic antisymmetry,
+when creating or destroying a particle in the $\alpha$th state:
+
+$$\begin{aligned}
+ J_\alpha = (-1)^{\sum_{j < \alpha} N_j}
+\end{aligned}$$
+
+So, for example, when creating a particle in state 4
+of $\ket{0, 1, 1, 0, 1}$, we get the following:
+
+$$\begin{aligned}
+ \hat{c}_4^\dagger \ket{0, 1, 1, 0, 1}
+ = (-1)^{0 + 1 + 1} \ket{0, 1, 1, 1, 1}
+\end{aligned}$$
+
+The point of the Jordan-Wigner string
+is that the order matters when applying the creation and annihilation operators:
+
+$$\begin{aligned}
+ \hat{c}_1^\dagger \hat{c}_2 \ket{0, 1}
+ &= \hat{c}_1^\dagger \ket{0, 0}
+ = \ket{1, 0}
+ \\
+ \hat{c}_2 \hat{c}_1^\dagger \ket{0, 1}
+ &= \hat{c}_2 \ket{1, 1}
+ = - \ket{1, 0}
+\end{aligned}$$
+
+In other words, $\hat{c}_1^\dagger \hat{c}_2 = - \hat{c}_2 \hat{c}_1^\dagger$,
+meaning that the anticommutator $\{\hat{c}_2, \hat{c}_1^\dagger\} = 0$.
+You can verify for yourself that
+the general anticommutators of these operators are given by:
+
+$$\begin{aligned}
+ \boxed{
+ \{\hat{c}_\alpha, \hat{c}_\beta\} = \{\hat{c}_\alpha^\dagger, \hat{c}_\beta^\dagger\} = 0
+ \qquad \quad
+ \{\hat{c}_\alpha, \hat{c}_\beta^\dagger\} = \delta_{\alpha\beta}
+ }
+\end{aligned}$$
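+
+These anticommutators can be verified numerically
+by building the operators as explicit matrices in the Fock basis.
+A brief sketch (assuming Python with NumPy; the helper `c` is my own construction),
+where the Kronecker products implement the Jordan-Wigner string from above:
+
+```python
+import numpy as np
+from functools import reduce
+
+I2 = np.eye(2)
+Z = np.diag([1.0, -1.0])                 # (-1)^(N_j) for a preceding state j
+a = np.array([[0.0, 1.0], [0.0, 0.0]])   # single-state annihilation |1> -> |0>
+
+def c(alpha, L):
+    """Annihilation operator c_alpha for L single-particle states (1-based)."""
+    ops = [Z] * (alpha - 1) + [a] + [I2] * (L - alpha)
+    return reduce(np.kron, ops)
+
+L = 4
+cs = [c(alpha, L) for alpha in range(1, L + 1)]
+anti = lambda A, B: A @ B + B @ A
+
+# {c_a, c_b} = 0 and {c_a, c_b^dagger} = delta_ab for all pairs
+for p, cp in enumerate(cs):
+    for q, cq in enumerate(cs):
+        assert np.allclose(anti(cp, cq), 0)
+        assert np.allclose(anti(cp, cq.conj().T), (p == q) * np.eye(2**L))
+print("anticommutators verified")
+```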
+
+Each single-particle state can only contain 0 or 1 fermions,
+so these operators **quench** states that would violate this rule.
+Note that these are *scalar* zeros:
+
+$$\begin{aligned}
+ \boxed{
+ \hat{c}_\alpha^\dagger \ket{... (N_\alpha\!=\!1) ...} = 0
+ \qquad \quad
+ \hat{c}_\alpha \ket{... (N_\alpha\!=\!0) ...} = 0
+ }
+\end{aligned}$$
+
+Finally, as has already been suggested by the notation, they are each other's adjoint:
+
+$$\begin{aligned}
+ \matrixel{... (N_\alpha\!=\!1) ...}{\hat{c}_\alpha^\dagger}{... (N_\alpha\!=\!0) ...}
+ = \matrixel{...(N_\alpha\!=\!0) ...}{\hat{c}_\alpha}{... (N_\alpha\!=\!1) ...}
+\end{aligned}$$
+
+Let us now use these operators to define the **number operator** $\hat{N}_\alpha$ as follows:
+
+$$\begin{aligned}
+ \boxed{
+ \hat{N}_\alpha = \hat{c}_\alpha^\dagger \hat{c}_\alpha
+ }
+\end{aligned}$$
+
+Its eigenvalue is the number of particles residing in state $\psi_\alpha$
+(note the hats: $\hat{N}_\alpha$ is the operator, $N_\alpha$ its eigenvalue):
+
+$$\begin{aligned}
+ \hat{N}_\alpha \ket{... N_\alpha ...}
+ = N_\alpha \ket{... N_\alpha ...}
+\end{aligned}$$
+
+
+## Bosons
+
+Bosons do not need to obey the Pauli exclusion principle, so multiple bosons can occupy a single state.
+The Fock states are therefore as follows:
+
+$$\begin{aligned}
+ \boxed{
+ \begin{aligned}
+ n &= 0:
+ \qquad \ket{0, 0, 0, ...}
+ \\
+ n &= 1:
+ \qquad \ket{1, 0, 0, ...} \quad \ket{0, 1, 0, ...} \quad \ket{0, 0, 1, ...} \quad \cdots
+ \\
+ n &= 2:
+ \qquad \ket{1, 1, 0, ...} \quad \ket{1, 0, 1, ...} \quad \ket{0, 1, 1, ...} \quad \cdots
+ \\
+ &\qquad\:\:\:
+ \qquad \ket{2, 0, 0, ...} \quad \ket{0, 2, 0, ...} \quad \ket{0, 0, 2, ...} \quad \cdots
+ \end{aligned}
+ }
+\end{aligned}$$
+
+They must be symmetric under the exchange of two bosons.
+To achieve this, the Fock states are represented by Slater *permanents*
+rather than determinants.
+
+The boson creation and annihilation operators $\hat{c}_\alpha^\dagger$ and
+$\hat{c}_\alpha$ are straightforward:
+
+$$\begin{gathered}
+ \boxed{
+ \begin{aligned}
+ \hat{c}_\alpha^\dagger \ket{... N_\alpha ...}
+ &= \sqrt{N_\alpha + 1} \: \ket{... (N_\alpha \!+\! 1) ...}
+ \\
+ \hat{c}_\alpha \ket{... N_\alpha ...}
+ &= \sqrt{N_\alpha} \: \ket{... (N_\alpha \!-\! 1) ...}
+ \end{aligned}
+}\end{gathered}$$
+
+Applying the annihilation operator $\hat{c}_\alpha$ when there are zero
+particles in $\alpha$ will quench the state:
+
+$$\begin{aligned}
+ \boxed{
+ \hat{c}_\alpha \ket{... (N_\alpha\!=\!0) ...} = 0
+ }
+\end{aligned}$$
+
+There is no Jordan-Wigner string, and therefore no sign change when commuting.
+Consequently, these operators satisfy commutation relations instead:
+
+$$\begin{aligned}
+ \boxed{
+ [\hat{c}_\alpha, \hat{c}_\beta] = [\hat{c}_\alpha^\dagger, \hat{c}_\beta^\dagger] = 0
+ \qquad
+ [\hat{c}_\alpha, \hat{c}_\beta^\dagger] = \delta_{\alpha\beta}
+ }
+\end{aligned}$$
+
+The constant factors applied by $\hat{c}_\alpha^\dagger$ and $\hat{c}_\alpha$
+ensure that $\hat{N}_\alpha$ keeps the same nice form:
+
+$$\begin{aligned}
+ \boxed{
+ \hat{N}_\alpha = \hat{c}_\alpha^\dagger \hat{c}_\alpha
+ }
+\end{aligned}$$
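+
+Numerically, a single bosonic state can only be represented
+in a truncated Fock space.
+A small sketch (assuming NumPy) with at most $N_\mathrm{max}$ particles,
+which reproduces the rules above everywhere except in the highest state,
+where the truncation cuts off $\hat{c}_\alpha^\dagger$:
+
+```python
+import numpy as np
+
+Nmax = 6  # truncation: keep the Fock states |0>, ..., |Nmax>
+
+# c|N> = sqrt(N)|N-1>, so sqrt(1), ..., sqrt(Nmax) on the superdiagonal
+c = np.diag(np.sqrt(np.arange(1, Nmax + 1)), k=1)
+cdag = c.T
+
+N_op = cdag @ c  # the number operator is diagonal: diag(0, 1, ..., Nmax)
+assert np.allclose(np.diag(N_op), np.arange(Nmax + 1))
+
+# [c, c^dagger] = 1 everywhere except the truncated top state
+comm = c @ cdag - cdag @ c
+print(np.allclose(comm[:-1, :-1], np.eye(Nmax)))  # True
+```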
+
+
+## Operators
+
+Traditionally, an operator $\hat{V}$ acting simultaneously on $N$ identical particles
+is the sum of the individual single-particle operators $\hat{V}_1$ acting on the $n$th particle:
+
+$$\begin{aligned}
+ \hat{V}
+ = \sum_{n = 1}^N \hat{V}_1
+\end{aligned}$$
+
+This can be rewritten using the second quantization operators as follows:
+
+$$\begin{aligned}
+ \boxed{
+ \hat{V}
+ = \sum_{\alpha, \beta} \matrixel{\alpha}{\hat{V}_1}{\beta} \hat{c}_\alpha^\dagger \hat{c}_\beta
+ }
+\end{aligned}$$
+
+Where the matrix element $\matrixel{\alpha}{\hat{V}_1}{\beta}$ is to be
+evaluated in the normal way:
+
+$$\begin{aligned}
+ \matrixel{\alpha}{\hat{V}_1}{\beta}
+ = \int \psi_\alpha^*(\vec{r}) \: \hat{V}_1(\vec{r}) \: \psi_\beta(\vec{r}) \dd{\vec{r}}
+\end{aligned}$$
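+
+Continuing the fermionic matrix sketch from earlier,
+the boxed formula can be assembled directly from the operators `cs` defined there,
+given any table of matrix elements (here random, purely for illustration):
+
+```python
+# V = sum_{alpha,beta} <alpha|V_1|beta> c_alpha^dagger c_beta
+rng = np.random.default_rng(1)
+V1 = rng.normal(size=(L, L))
+V1 = (V1 + V1.T) / 2   # Hermitian table of elements <alpha|V_1|beta>
+
+V = sum(V1[p, q] * cs[p].conj().T @ cs[q]
+        for p in range(L) for q in range(L))
+
+# Sanity check: between one-particle states, V reduces to V1
+vac = np.zeros(2**L); vac[0] = 1.0    # the vacuum |0, 0, 0, 0>
+ket = V @ (cs[0].conj().T @ vac)      # V acting on a particle in state 1
+bra = cs[1].conj().T @ vac            # a particle in state 2
+print(np.isclose(bra.conj() @ ket, V1[1, 0]))  # True: <2|V_1|1>
+```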
+
+Similarly, given some two-particle operator $\hat{V}$ in first-quantized form:
+
+$$\begin{aligned}
+ \hat{V}
+ = \sum_{n \neq m} v(\vec{r}_n, \vec{r}_m)
+\end{aligned}$$
+
+We can rewrite this in second-quantized form as follows.
+Note the ordering of the subscripts:
+
+$$\begin{aligned}
+ \boxed{
+ \hat{V}
+ = \sum_{\alpha, \beta, \gamma, \delta}
+ v_{\alpha \beta \gamma \delta} \hat{c}_\alpha^\dagger \hat{c}_\beta^\dagger \hat{c}_\delta \hat{c}_\gamma
+ }
+\end{aligned}$$
+
+Where the constant $v_{\alpha \beta \gamma \delta}$ is defined from the
+single-particle wave functions:
+
+$$\begin{aligned}
+ v_{\alpha \beta \gamma \delta}
+ = \iint \psi_\alpha^*(\vec{r}_1) \: \psi_\beta^*(\vec{r}_2)
+ \: v(\vec{r}_1, \vec{r}_2) \: \psi_\gamma(\vec{r}_1)
+ \: \psi_\delta(\vec{r}_2) \dd{\vec{r}_1} \dd{\vec{r}_2}
+\end{aligned}$$
+
+Finally, in the second quantization, changing basis is done in the usual way:
+
+$$\begin{aligned}
+ \hat{c}_b^\dagger \ket{0}
+ = \ket{b}
+ = \sum_{\alpha} \ket{\alpha} \braket{\alpha}{b}
+ = \sum_{\alpha} \braket{\alpha}{b} \hat{c}_\alpha^\dagger \ket{0}
+\end{aligned}$$
+
+Where $\alpha$ and $b$ need not be in the same basis.
+With this, we can define the **field operators**,
+which create or destroy a particle at a given position $\vec{r}$:
+
+$$\begin{aligned}
+ \boxed{
+ \hat{\psi}^\dagger(\vec{r})
+ = \sum_{\alpha} \braket{\alpha}{\vec{r}} \hat{c}_\alpha^\dagger
+ \qquad \quad
+ \hat{\psi}(\vec{r})
+ = \sum_{\alpha} \braket{\vec{r}}{\alpha} \hat{c}_\alpha
+ }
+\end{aligned}$$
+
+
+## References
+1. L.E. Ballentine,
+ *Quantum mechanics: a modern development*, 2nd edition,
+ World Scientific.