From 88d42030530abeba4f3ceaf63da71e6cdfa71267 Mon Sep 17 00:00:00 2001 From: Prefetch Date: Sat, 20 Feb 2021 20:21:32 +0100 Subject: Stop tracking the knowledge base's index.html files --- static/know/concept/blochs-theorem/index.html | 103 --------- static/know/concept/dirac-notation/index.html | 137 ------------ .../concept/pauli-exclusion-principle/index.html | 107 ---------- static/know/concept/probability-current/index.html | 108 ---------- .../index.html | 230 --------------------- 5 files changed, 685 deletions(-) delete mode 100644 static/know/concept/blochs-theorem/index.html delete mode 100644 static/know/concept/dirac-notation/index.html delete mode 100644 static/know/concept/pauli-exclusion-principle/index.html delete mode 100644 static/know/concept/probability-current/index.html delete mode 100644 static/know/concept/time-independent-perturbation-theory/index.html (limited to 'static/know/concept') diff --git a/static/know/concept/blochs-theorem/index.html b/static/know/concept/blochs-theorem/index.html deleted file mode 100644 index f977739..0000000 --- a/static/know/concept/blochs-theorem/index.html +++ /dev/null @@ -1,103 +0,0 @@ - - -
- - - -In quantum mechanics, Bloch’s theorem states that, given a potential \(V(\vec{r})\) which is periodic on a lattice, i.e. \(V(\vec{r}) = V(\vec{r} + \vec{a})\) for a primitive lattice vector \(\vec{a}\), the solutions \(\psi(\vec{r})\) to the time-independent Schrödinger equation take the following form, where the function \(u(\vec{r})\) is periodic on the same lattice, i.e. \(u(\vec{r}) = u(\vec{r} + \vec{a})\):
-\[ -\begin{aligned} - \boxed{ - \psi(\vec{r}) = u(\vec{r}) e^{i \vec{k} \cdot \vec{r}} - } -\end{aligned} -\]
-In other words, in a periodic potential, the solutions are simply plane waves with a periodic modulation, known as Bloch functions or Bloch states.
-This is surprisingly easy to prove: if the Hamiltonian \(\hat{H}\) is lattice-periodic, then it will commute with the unitary translation operator \(\hat{T}(\vec{a})\), i.e. \([\hat{H}, \hat{T}(\vec{a})] = 0\). Therefore \(\hat{H}\) and \(\hat{T}(\vec{a})\) must share eigenstates \(\psi(\vec{r})\):
-\[ -\begin{aligned} - \hat{H} \:\psi(\vec{r}) = E \:\psi(\vec{r}) - \qquad - \hat{T}(\vec{a}) \:\psi(\vec{r}) = \tau \:\psi(\vec{r}) -\end{aligned} -\]
-Since \(\hat{T}\) is unitary, its eigenvalues \(\tau\) must have the form \(e^{i \theta}\), with \(\theta\) real. Therefore a translation by \(\vec{a}\) causes a phase shift, for some vector \(\vec{k}\):
-\[ -\begin{aligned} - \psi(\vec{r} + \vec{a}) - = \hat{T}(\vec{a}) \:\psi(\vec{r}) - = e^{i \theta} \:\psi(\vec{r}) - = e^{i \vec{k} \cdot \vec{a}} \:\psi(\vec{r}) -\end{aligned} -\]
-Let us now define the following function, keeping our arbitrary choice of \(\vec{k}\):
-\[ -\begin{aligned} - u(\vec{r}) - = e^{- i \vec{k} \cdot \vec{r}} \:\psi(\vec{r}) -\end{aligned} -\]
-As it turns out, this function is guaranteed to be lattice-periodic for any \(\vec{k}\):
-\[ -\begin{aligned} - u(\vec{r} + \vec{a}) - &= e^{- i \vec{k} \cdot (\vec{r} + \vec{a})} \:\psi(\vec{r} + \vec{a}) - \\ - &= e^{- i \vec{k} \cdot \vec{r}} e^{- i \vec{k} \cdot \vec{a}} e^{i \vec{k} \cdot \vec{a}} \:\psi(\vec{r}) - \\ - &= e^{- i \vec{k} \cdot \vec{r}} \:\psi(\vec{r}) - \\ - &= u(\vec{r}) -\end{aligned} -\]
-Then Bloch’s theorem follows by solving the definition of \(u(\vec{r})\) for \(\psi(\vec{r})\).
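As a sanity check (not part of the original article), Bloch's theorem can be verified numerically for an assumed tight-binding model: a periodic chain with a two-site unit cell, whose eigenstates should pick up a pure phase under translation by one cell. A minimal NumPy sketch:

```python
import numpy as np

# Assumed model: 1D tight-binding chain, N unit cells of 2 sites each,
# on-site energies (0, 0.5) forming a lattice-periodic potential,
# nearest-neighbour hopping t = -1, periodic boundary conditions.
N = 8
M = 2 * N
H = np.zeros((M, M))
H[np.arange(M), np.arange(M)] = np.tile([0.0, 0.5], N)
for j in range(M):
    H[j, (j + 1) % M] = H[(j + 1) % M, j] = -1.0

# Translation by one unit cell (2 sites): (T psi)(r) = psi(r - a)
T = np.roll(np.eye(M), 2, axis=0)
assert np.allclose(H @ T, T @ H)  # [H, T] = 0 for a periodic potential

# i(T - T^dag) is Hermitian and commutes with H; adding a tiny multiple of it
# lifts the +k/-k degeneracy so eigh returns genuine Bloch states.
w, V = np.linalg.eigh(H + 1e-6j * (T - T.T))
for psi in V.T:
    tau = psi.conj() @ (T @ psi)             # eigenvalue of T for this state
    assert np.isclose(abs(tau), 1.0)         # unitary T => pure phase e^{ika}
    assert np.allclose(T @ psi, tau * psi)   # psi(r + a) = e^{ika} psi(r)
```

The key point mirrored here is that a commuting Hermitian combination of \(\hat{T}\) is diagonalized together with \(\hat{H}\), so every returned eigenstate is a translation eigenstate.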
-Dirac notation is a way to do calculations in a Hilbert space without needing to worry about the space’s representation. It is basically the lingua franca of quantum mechanics.
-In Dirac notation there are kets \(\ket{V}\) from the Hilbert space \(\mathbb{H}\) and bras \(\bra{V}\) from a dual \(\mathbb{H}'\) of the former. Crucially, the bras and kets are from different Hilbert spaces and therefore cannot be added, but every bra has a corresponding ket and vice versa.
-Bras and kets can only be combined in two ways: the inner product \(\braket{V}{W}\), which returns a scalar, and the outer product \(\ket{V} \bra{W}\), which returns a mapping \(\hat{L}\) from kets \(\ket{V}\) to other kets \(\ket{V'}\), i.e. a linear operator. Recall that the Hilbert inner product must satisfy:
-\[\begin{aligned} - \braket{V}{W} = \braket{W}{V}^* -\end{aligned}\]
-So far, nothing has been said about the actual representation of bras or kets. If we represent kets as \(N\)-dimensional column vectors, the corresponding bras are given by the kets’ adjoints, i.e. their transpose conjugates:
-\[\begin{aligned} - \ket{V} = - \begin{bmatrix} - v_1 \\ \vdots \\ v_N - \end{bmatrix} - \quad \implies \quad - \bra{V} = - \begin{bmatrix} - v_1^* & \cdots & v_N^* - \end{bmatrix} -\end{aligned}\]
-The inner product \(\braket{V}{W}\) is then just the familiar dot product \(V \cdot W\):
-\[\begin{gathered} - \braket{V}{W} - = - \begin{bmatrix} - v_1^* & \cdots & v_N^* - \end{bmatrix} - \cdot - \begin{bmatrix} - w_1 \\ \vdots \\ w_N - \end{bmatrix} - = v_1^* w_1 + ... + v_N^* w_N -\end{gathered}\]
-Meanwhile, the outer product \(\ket{V} \bra{W}\) creates an \(N \cross N\) matrix:
-\[\begin{gathered} - \ket{V} \bra{W} - = - \begin{bmatrix} - v_1 \\ \vdots \\ v_N - \end{bmatrix} - \cdot - \begin{bmatrix} - w_1^* & \cdots & w_N^* - \end{bmatrix} - = - \begin{bmatrix} - v_1 w_1^* & \cdots & v_1 w_N^* \\ - \vdots & \ddots & \vdots \\ - v_N w_1^* & \cdots & v_N w_N^* - \end{bmatrix} -\end{gathered}\]
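These matrix representations are easy to play with numerically; the vectors below are arbitrary examples, using NumPy columns to stand in for the abstract kets:

```python
import numpy as np

# Example kets as 3x1 column vectors (arbitrary illustrative values)
ket_V = np.array([[1.0 + 1.0j], [2.0], [0.5j]])   # |V>
ket_W = np.array([[0.0], [1.0j], [3.0]])          # |W>

bra_V = ket_V.conj().T            # <V| is the adjoint (transpose conjugate)

inner = (bra_V @ ket_W).item()    # <V|W>, a scalar
outer = ket_V @ ket_W.conj().T    # |V><W|, a 3x3 matrix (linear operator)

# <V|W> = <W|V>*, the defining symmetry of the Hilbert inner product
assert np.isclose(inner, (ket_W.conj().T @ ket_V).item().conjugate())

# The operator |V><W| maps any ket |u> to |V> times the scalar <W|u>
ket_u = np.array([[1.0], [1.0], [1.0]])
assert np.allclose(outer @ ket_u, ket_V * (ket_W.conj().T @ ket_u).item())
```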
-If the kets are instead represented by functions \(f(x)\) of \(x \in [a, b]\), then the bras represent functionals \(F[u(x)]\) which take an unknown function \(u(x)\) as an argument and turn it into a scalar using integration:
-\[\begin{aligned} - \ket{f} = f(x) - \quad \implies \quad - \bra{f} - = F[u(x)] - = \int_a^b f^*(x) \: u(x) \dd{x} -\end{aligned}\]
-Consequently, the inner product is simply the following familiar integral:
-\[\begin{gathered} - \braket{f}{g} - = F[g(x)] - = \int_a^b f^*(x) \: g(x) \dd{x} -\end{gathered}\]
-However, the outer product becomes something rather abstract:
-\[\begin{gathered} - \ket{f} \bra{g} - = f(x) \: G[u(x)] - = f(x) \int_a^b g^*(\xi) \: u(\xi) \dd{\xi} -\end{gathered}\]
-This result makes more sense if we surround it by a bra and a ket:
-\[\begin{aligned} - \bra{u} \!\Big(\!\ket{f} \bra{g}\!\Big)\! \ket{w} - &= U\big[f(x) \: G[w(x)]\big] - = U\Big[ f(x) \int_a^b g^*(\xi) \: w(\xi) \dd{\xi} \Big] - \\ - &= \int_a^b u^*(x) \: f(x) \: \Big(\int_a^b g^*(\xi) \: w(\xi) \dd{\xi} \Big) \dd{x} - \\ - &= \Big( \int_a^b u^*(x) \: f(x) \dd{x} \Big) \Big( \int_a^b g^*(\xi) \: w(\xi) \dd{\xi} \Big) - \\ - &= \braket{u}{f} \braket{g}{w} -\end{aligned}\]
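The same factorization can be checked on a grid, approximating the integrals with a simple rectangle rule; the interval and functions chosen below are arbitrary examples:

```python
import numpy as np

# Function-space kets sampled on a grid over [a, b] (rectangle-rule integrals)
a, b, n = 0.0, np.pi, 10_000
x = np.linspace(a, b, n)
dx = x[1] - x[0]

def braket(f, g):
    """<f|g> = integral of f*(x) g(x) dx, approximated on the grid."""
    return np.sum(np.conj(f) * g) * dx

f, g = np.sin(x), np.exp(1j * x)   # arbitrary example "kets"
u, w = np.cos(x), x**2

# <u|( |f><g| )|w> factorizes into <u|f> <g|w>:
lhs = braket(u, f * braket(g, w))  # apply |f><g| to w, then project onto u
rhs = braket(u, f) * braket(g, w)
assert np.isclose(lhs, rhs)
```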
-In quantum mechanics, the Pauli exclusion principle is a theorem that has profound consequences for how the world works.
-Suppose we have a composite state \(\ket*{x_1}\ket*{x_2} = \ket*{x_1} \otimes \ket*{x_2}\), where the two identical particles \(x_1\) and \(x_2\) each can occupy the same two allowed states \(a\) and \(b\). We then define the permutation operator \(\hat{P}\) as follows:
-\[\begin{aligned} - \hat{P} \ket{a}\ket{b} = \ket{b}\ket{a} -\end{aligned}\]
-That is, it swaps the states of the particles. Obviously, swapping the states twice simply gives the original configuration again, so:
-\[\begin{aligned} - \hat{P}^2 \ket{a}\ket{b} = \ket{a}\ket{b} -\end{aligned}\]
-Therefore, \(\ket{a}\ket{b}\) is an eigenvector of \(\hat{P}^2\) with eigenvalue \(1\). Since \([\hat{P}, \hat{P}^2] = 0\), \(\ket{a}\ket{b}\) must also be an eigenket of \(\hat{P}\) with eigenvalue \(\lambda\), satisfying \(\lambda^2 = 1\), so we know that \(\lambda = 1\) or \(\lambda = -1\).
-As it turns out, in nature, each class of particle has a single associated permutation eigenvalue \(\lambda\), or in other words: whether \(\lambda\) is \(-1\) or \(1\) depends on the species of particle that \(x_1\) and \(x_2\) represent. Particles with \(\lambda = -1\) are called fermions, and those with \(\lambda = 1\) are known as bosons. We define \(\hat{P}_f\) with \(\lambda = -1\) and \(\hat{P}_b\) with \(\lambda = 1\), such that:
-\[\begin{aligned} - \hat{P}_f \ket{a}\ket{b} = \ket{b}\ket{a} = - \ket{a}\ket{b} - \qquad - \hat{P}_b \ket{a}\ket{b} = \ket{b}\ket{a} = \ket{a}\ket{b} -\end{aligned}\]
-Another fundamental fact of nature is that identical particles cannot be distinguished by any observation. Therefore it is impossible to tell apart \(\ket{a}\ket{b}\) and the permuted state \(\ket{b}\ket{a}\), regardless of the eigenvalue \(\lambda\). There is no physical difference!
-But this does not mean that \(\hat{P}\) is useless: despite not having any observable effect, the resulting difference between fermions and bosons is absolutely fundamental. Consider the following superposition state, where \(\alpha\) and \(\beta\) are unknown:
-\[\begin{aligned} - \ket{\Psi(a, b)} - = \alpha \ket{a}\ket{b} + \beta \ket{b}\ket{a} -\end{aligned}\]
-When we apply \(\hat{P}\), we can “choose” between two “interpretations” of its action, both shown below. Obviously, since the left-hand sides are equal, the right-hand sides must be equal too:
-\[\begin{aligned} - \hat{P} \ket{\Psi(a, b)} - &= \lambda \alpha \ket{a}\ket{b} + \lambda \beta \ket{b}\ket{a} - \\ - \hat{P} \ket{\Psi(a, b)} - &= \alpha \ket{b}\ket{a} + \beta \ket{a}\ket{b} -\end{aligned}\]
-This gives us the equations \(\lambda \alpha = \beta\) and \(\lambda \beta = \alpha\). In fact, just from this we could have deduced that \(\lambda\) can be either \(-1\) or \(1\). In any case, for bosons (\(\lambda = 1\)), we thus find that \(\alpha = \beta\):
-\[\begin{aligned} - \ket{\Psi(a, b)}_b = C \big( \ket{a}\ket{b} + \ket{b}\ket{a} \big) -\end{aligned}\]
-Where \(C\) is a normalization constant. As expected, this state is symmetric: switching \(a\) and \(b\) gives the same result. Meanwhile, for fermions (\(\lambda = -1\)), we find that \(\alpha = -\beta\):
-\[\begin{aligned} - \ket{\Psi(a, b)}_f = C \big( \ket{a}\ket{b} - \ket{b}\ket{a} \big) -\end{aligned}\]
-This state is called antisymmetric under exchange: switching \(a\) and \(b\) causes a sign change, as we would expect for fermions.
-Now, what if the particles \(x_1\) and \(x_2\) are in the same state \(a\)? For bosons, we just need to update the normalization constant \(C\):
-\[\begin{aligned} - \ket{\Psi(a, a)}_b - = C \ket{a}\ket{a} -\end{aligned}\]
-However, for fermions, the state is unnormalizable and thus unphysical:
-\[\begin{aligned} - \ket{\Psi(a, a)}_f - = C \big( \ket{a}\ket{a} - \ket{a}\ket{a} \big) - = 0 -\end{aligned}\]
-At last, this is the Pauli exclusion principle: fermions may never occupy the same quantum state. One of the many notable consequences of this is that the shells of an atom only fit a limited number of electrons, since each must have a different quantum number.
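The whole argument can be made concrete with Kronecker products; the two-dimensional single-particle space below is an arbitrary example:

```python
import numpy as np

# Single-particle basis states a and b (arbitrary 2D example space)
a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])

ab = np.kron(a, b)   # |a>|b>
ba = np.kron(b, a)   # |b>|a>

sym  = (ab + ba) / np.sqrt(2)   # bosonic (symmetric) combination
anti = (ab - ba) / np.sqrt(2)   # fermionic (antisymmetric) combination

# Permutation operator on the tensor-product space: P |i>|j> = |j>|i>
P = np.zeros((4, 4))
for i in range(2):
    for j in range(2):
        P[2 * j + i, 2 * i + j] = 1.0

assert np.allclose(P @ sym, sym)      # eigenvalue lambda = +1
assert np.allclose(P @ anti, -anti)   # eigenvalue lambda = -1

# Pauli exclusion: the antisymmetric combination of |a>|a> vanishes
aa = np.kron(a, a)
assert np.allclose(aa - P @ aa, 0.0)
```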
-In quantum mechanics, the probability current describes the movement of the probability of finding a particle at a given point in space. In other words, it treats the particle as a heterogeneous fluid with density \(|\psi|^2\). Now, the probability of finding the particle within a volume \(V\) is:
-\[\begin{aligned} - P = \int_{V} | \psi |^2 \dd[3]{\vec{r}} -\end{aligned}\]
-As the system evolves in time, this probability may change, so we take its derivative with respect to time \(t\), and use the Schrödinger equation (and its complex conjugate) to replace the time derivatives of \(\psi\) and \(\psi^*\):
-\[\begin{aligned} - \pdv{P}{t} - &= \int_{V} \psi \pdv{\psi^*}{t} + \psi^* \pdv{\psi}{t} \dd[3]{\vec{r}} - = \frac{i}{\hbar} \int_{V} \psi (\hat{H} \psi^*) - \psi^* (\hat{H} \psi) \dd[3]{\vec{r}} - \\ - &= \frac{i}{\hbar} \int_{V} \psi \Big( \!-\! \frac{\hbar^2}{2 m} \nabla^2 \psi^* + V(\vec{r}) \psi^* \Big) - - \psi^* \Big( \!-\! \frac{\hbar^2}{2 m} \nabla^2 \psi + V(\vec{r}) \psi \Big) \dd[3]{\vec{r}} - \\ - &= \frac{i \hbar}{2 m} \int_{V} - \psi \nabla^2 \psi^* + \psi^* \nabla^2 \psi \dd[3]{\vec{r}} - = - \int_{V} \nabla \cdot \vec{J} \dd[3]{\vec{r}} -\end{aligned}\]
-Where we have defined the probability current \(\vec{J}\) as follows in the \(\vec{r}\)-basis:
-\[\begin{aligned} - \vec{J} - = \frac{i \hbar}{2 m} (\psi \nabla \psi^* - \psi^* \nabla \psi) - = \mathrm{Re} \Big\{ \psi \frac{i \hbar}{m} \nabla \psi^* \Big\} -\end{aligned}\]
-Let us rewrite this using the momentum operator \(\hat{p} = -i \hbar \nabla\) as follows, noting that \(\hat{p} / m\) is simply the velocity operator \(\hat{v}\):
-\[\begin{aligned} - \boxed{ - \vec{J} - = \frac{1}{2 m} ( \psi^* \hat{p} \psi - \psi \hat{p} \psi^*) - = \mathrm{Re} \Big\{ \psi^* \frac{\hat{p}}{m} \psi \Big\} - = \mathrm{Re} \{ \psi^* \hat{v} \psi \} - } -\end{aligned}\]
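As an illustration, for a plane wave \(\psi = e^{ikx}\) the boxed formula should give \(J = \hbar k |\psi|^2 / m\), i.e. density times velocity. A quick numerical sketch (setting \(\hbar = m = 1\) for convenience):

```python
import numpy as np

# Plane wave psi = exp(ikx) with hbar = m = 1 (units chosen for convenience)
hbar = m = 1.0
k = 2.5
x = np.linspace(0.0, 10.0, 2001)
dx = x[1] - x[0]
psi = np.exp(1j * k * x)

dpsi = np.gradient(psi, dx)                   # finite-difference d(psi)/dx
J = (hbar / m) * np.imag(psi.conj() * dpsi)   # J = Re{ psi* (p/m) psi }

# Away from the grid edges, J should equal (hbar k / m) |psi|^2
assert np.allclose(J[5:-5], hbar * k / m * np.abs(psi[5:-5])**2, rtol=1e-3)
```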
-Returning to the derivation of \(\vec{J}\), we now have the following equation:
-\[\begin{aligned} - \pdv{P}{t} - = \int_{V} \pdv{|\psi|^2}{t} \dd[3]{\vec{r}} - = - \int_{V} \nabla \cdot \vec{J} \dd[3]{\vec{r}} -\end{aligned}\]
-By removing the integrals, we thus arrive at the continuity equation for \(\vec{J}\):
-\[\begin{aligned} - \boxed{ - \nabla \cdot \vec{J} - = - \pdv{|\psi|^2}{t} - } -\end{aligned}\]
-This states that probability is conserved locally, and is reminiscent of charge conservation in electromagnetism. In other words, the probability at a point can only change by “flowing” towards or away from it. Thus \(\vec{J}\) represents the flow of probability, which is analogous to the motion of a particle.
-As a bonus, this still holds for a particle in an electromagnetic vector potential \(\vec{A}\), thanks to the gauge invariance of the Schrödinger equation. We can thus extend the definition to a particle with charge \(q\) in a field expressed in SI units, neglecting spin:
-\[\begin{aligned} - \boxed{ - \vec{J} - = \mathrm{Re} \Big\{ \psi^* \frac{\hat{p} - q \vec{A}}{m} \psi \Big\} - } -\end{aligned}\]
-Time-independent perturbation theory, sometimes also called stationary state perturbation theory, is a specific application of perturbation theory to the time-independent Schrödinger equation in quantum physics, for Hamiltonians of the following form:
-\[\begin{aligned} - \hat{H} = \hat{H}_0 + \lambda \hat{H}_1 -\end{aligned}\]
-Where \(\hat{H}_0\) is a Hamiltonian for which the time-independent Schrödinger equation has a known solution, and \(\hat{H}_1\) is a small perturbing Hamiltonian. The eigenenergies \(E_n\) and eigenstates \(\ket{\psi_n}\) of the composite problem are expanded in the perturbation “bookkeeping” parameter \(\lambda\):
-\[\begin{aligned} - \ket{\psi_n} - &= \ket*{\psi_n^{(0)}} + \lambda \ket*{\psi_n^{(1)}} + \lambda^2 \ket*{\psi_n^{(2)}} + ... - \\ - E_n - &= E_n^{(0)} + \lambda E_n^{(1)} + \lambda^2 E_n^{(2)} + ... -\end{aligned}\]
-Where \(E_n^{(1)}\) and \(\ket*{\psi_n^{(1)}}\) are called the first-order corrections, and so on for higher orders. We insert this into the Schrödinger equation:
-\[\begin{aligned} - \hat{H} \ket{\psi_n} - &= \hat{H}_0 \ket*{\psi_n^{(0)}} - + \lambda \big( \hat{H}_1 \ket*{\psi_n^{(0)}} + \hat{H}_0 \ket*{\psi_n^{(1)}} \big) \\ - &\qquad + \lambda^2 \big( \hat{H}_1 \ket*{\psi_n^{(1)}} + \hat{H}_0 \ket*{\psi_n^{(2)}} \big) + ... - \\ - E_n \ket{\psi_n} - &= E_n^{(0)} \ket*{\psi_n^{(0)}} - + \lambda \big( E_n^{(1)} \ket*{\psi_n^{(0)}} + E_n^{(0)} \ket*{\psi_n^{(1)}} \big) \\ - &\qquad + \lambda^2 \big( E_n^{(2)} \ket*{\psi_n^{(0)}} + E_n^{(1)} \ket*{\psi_n^{(1)}} + E_n^{(0)} \ket*{\psi_n^{(2)}} \big) + ... -\end{aligned}\]
-If we collect the terms according to the order of \(\lambda\), we arrive at the following endless series of equations, of which in practice only the first three are typically used:
-\[\begin{aligned} - \hat{H}_0 \ket*{\psi_n^{(0)}} - &= E_n^{(0)} \ket*{\psi_n^{(0)}} - \\ - \hat{H}_1 \ket*{\psi_n^{(0)}} + \hat{H}_0 \ket*{\psi_n^{(1)}} - &= E_n^{(1)} \ket*{\psi_n^{(0)}} + E_n^{(0)} \ket*{\psi_n^{(1)}} - \\ - \hat{H}_1 \ket*{\psi_n^{(1)}} + \hat{H}_0 \ket*{\psi_n^{(2)}} - &= E_n^{(2)} \ket*{\psi_n^{(0)}} + E_n^{(1)} \ket*{\psi_n^{(1)}} + E_n^{(0)} \ket*{\psi_n^{(2)}} - \\ - ... - &= ... -\end{aligned}\]
-The first equation is the unperturbed problem, which we assume has already been solved, with eigenvalues \(E_n^{(0)} = \varepsilon_n\) and eigenvectors \(\ket*{\psi_n^{(0)}} = \ket{n}\):
-\[\begin{aligned} - \hat{H}_0 \ket{n} = \varepsilon_n \ket{n} -\end{aligned}\]
-The approach to solving the other two equations varies depending on whether this \(\hat{H}_0\) has a degenerate spectrum or not.
-We start by assuming that there is no degeneracy, in other words, each \(\varepsilon_n\) corresponds to one \(\ket{n}\). At order \(\lambda^1\), we rewrite the equation as follows:
-\[\begin{aligned} - (\hat{H}_1 - E_n^{(1)}) \ket{n} + (\hat{H}_0 - \varepsilon_n) \ket*{\psi_n^{(1)}} = 0 -\end{aligned}\]
-Since \(\ket{n}\) form a complete basis, we can express \(\ket*{\psi_n^{(1)}}\) in terms of them:
-\[\begin{aligned} - \ket*{\psi_n^{(1)}} = \sum_{m \neq n} c_m \ket{m} -\end{aligned}\]
-Importantly, \(n\) has been removed from the summation to prevent dividing by zero later. We are allowed to do this, because \(\ket*{\psi_n^{(1)}} - c_n \ket{n}\) also satisfies the order-\(\lambda^1\) equation for any value of \(c_n\), as demonstrated here:
-\[\begin{aligned} - (\hat{H}_1 - E_n^{(1)}) \ket{n} + (\hat{H}_0 - \varepsilon_n) \ket*{\psi_n^{(1)}} - (\varepsilon_n - \varepsilon_n) c_n \ket{n} = 0 -\end{aligned}\]
-Where we used \(\hat{H}_0 \ket{n} = \varepsilon_n \ket{n}\). We insert the series form of \(\ket*{\psi_n^{(1)}}\) into the \(\lambda^1\)-equation:
-\[\begin{aligned} - (\hat{H}_1 - E_n^{(1)}) \ket{n} + \sum_{m \neq n} c_m (\varepsilon_m - \varepsilon_n) \ket{m} = 0 -\end{aligned}\]
-We then put an arbitrary basis vector \(\bra{k}\) in front of this equation to get:
-\[\begin{aligned} - \matrixel{k}{\hat{H}_1}{n} - E_n^{(1)} \braket{k}{n} + \sum_{m \neq n} c_m (\varepsilon_m - \varepsilon_n) \braket{k}{m} = 0 -\end{aligned}\]
-Suppose that \(k = n\). Since \(\ket{n}\) form an orthonormal basis, we end up with:
-\[\begin{aligned} - \boxed{ - E_n^{(1)} = \matrixel{n}{\hat{H}_1}{n} - } -\end{aligned}\]
-In other words, the first-order energy correction \(E_n^{(1)}\) is the expectation value of the perturbation \(\hat{H}_1\) for the unperturbed state \(\ket{n}\).
-Suppose now that \(k \neq n\), then only one term of the summation survives, and we are left with the following equation, which tells us \(c_k\):
-\[\begin{aligned} - \matrixel{k}{\hat{H}_1}{n} + c_k (\varepsilon_k - \varepsilon_n) = 0 -\end{aligned}\]
-We isolate this result for \(c_k\) and insert it into the series form of \(\ket*{\psi_n^{(1)}}\) to get the full first-order correction to the wave function:
-\[\begin{aligned} - \boxed{ - \ket*{\psi_n^{(1)}} - = \sum_{m \neq n} \frac{\matrixel{m}{\hat{H}_1}{n}}{\varepsilon_n - \varepsilon_m} \ket{m} - } -\end{aligned}\]
-Here it is clear why this is only valid in the non-degenerate case: otherwise we would divide by zero in the denominator.
-Next, to find the second-order correction to the energy \(E_n^{(2)}\), we take the corresponding equation and put \(\bra{n}\) in front of it:
-\[\begin{aligned} - \matrixel{n}{\hat{H}_1}{\psi_n^{(1)}} + \matrixel{n}{\hat{H}_0}{\psi_n^{(2)}} - &= E_n^{(2)} \braket{n}{n} + E_n^{(1)} \braket{n}{\psi_n^{(1)}} + \varepsilon_n \braket{n}{\psi_n^{(2)}} -\end{aligned}\]
-Because \(\hat{H}_0\) is Hermitian, we know that \(\matrixel{n}{\hat{H}_0}{\psi_n^{(2)}} = \varepsilon_n \braket{n}{\psi_n^{(2)}}\), i.e. we apply it to the bra, which lets us eliminate two terms. Also, since \(\ket{n}\) is normalized, we find:
-\[\begin{aligned} - E_n^{(2)} - = \matrixel{n}{\hat{H}_1}{\psi_n^{(1)}} - E_n^{(1)} \braket{n}{\psi_n^{(1)}} -\end{aligned}\]
-We explicitly removed the \(\ket{n}\)-dependence of \(\ket*{\psi_n^{(1)}}\), so the last term is zero. By simply inserting our result for \(\ket*{\psi_n^{(1)}}\), we thus arrive at:
-\[\begin{aligned} - \boxed{ - E_n^{(2)} - = \sum_{m \neq n} \frac{\big| \matrixel{m}{\hat{H}_1}{n} \big|^2}{\varepsilon_n - \varepsilon_m} - } -\end{aligned}\]
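Both boxed corrections can be checked against exact diagonalization; the matrices below are arbitrary example numbers, not taken from the text:

```python
import numpy as np

# Example: known non-degenerate H0 plus a small Hermitian perturbation H1
H0 = np.diag([0.0, 1.0, 3.0])
H1 = np.array([[0.0, 0.2, 0.1],
               [0.2, 0.0, 0.3],
               [0.1, 0.3, 0.0]])
lam = 1e-2                                 # bookkeeping parameter

eps = np.diag(H0)
exact = np.linalg.eigvalsh(H0 + lam * H1)  # ascending, matching eps's order

for n in range(3):
    E1 = H1[n, n]                          # <n|H1|n>
    E2 = sum(H1[m, n]**2 / (eps[n] - eps[m])
             for m in range(3) if m != n)  # sum |<m|H1|n>|^2 / (eps_n - eps_m)
    approx = eps[n] + lam * E1 + lam**2 * E2
    assert abs(exact[n] - approx) < 1e-7   # residual error is O(lam^3)
```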
-In practice, it is not particularly useful to calculate more corrections.
-If \(\varepsilon_n\) is \(D\)-fold degenerate, then its eigenstate could be any vector from the corresponding \(D\)-dimensional eigenspace, spanned by orthonormal states \(\ket{n, d}\):
-\[\begin{aligned} - \hat{H}_0 \ket{n} = \varepsilon_n \ket{n} - \quad \mathrm{where} \quad - \ket{n} - = \sum_{d = 1}^{D} c_{d} \ket{n, d} -\end{aligned}\]
-In general, adding the perturbation \(\hat{H}_1\) will lift the degeneracy, meaning the perturbed states will be non-degenerate. In the limit \(\lambda \to 0\), these \(D\) perturbed states change into \(D\) orthogonal states which are all valid \(\ket{n}\).
-However, the \(\ket{n}\) that they converge to are not arbitrary: only certain unperturbed eigenstates are “good” states. Without \(\hat{H}_1\), this distinction is irrelevant, but in the perturbed case it will turn out to be important.
-For now, we write \(\ket{n, d}\) to refer to any orthonormal set of vectors in the eigenspace of \(\varepsilon_n\) (not necessarily the “good” ones), and \(\ket{n}\) to denote any linear combination of these. We then take the equation at order \(\lambda^1\) and prepend an arbitrary eigenspace basis vector \(\bra{n, \delta}\):
-\[\begin{aligned} - \matrixel{n, \delta}{\hat{H}_1}{n} + \matrixel{n, \delta}{\hat{H}_0}{\psi_n^{(1)}} - &= E_n^{(1)} \braket{n, \delta}{n} + \varepsilon_n \braket{n, \delta}{\psi_n^{(1)}} -\end{aligned}\]
-Since \(\hat{H}_0\) is Hermitian, we use the same trick as before to reduce the problem to:
-\[\begin{aligned} - \matrixel{n, \delta}{\hat{H}_1}{n} - &= E_n^{(1)} \braket{n, \delta}{n} -\end{aligned}\]
-We express \(\ket{n}\) as a linear combination of the eigenbasis vectors \(\ket{n, d}\) to get:
-\[\begin{aligned} - \sum_{d = 1}^{D} c_d \matrixel{n, \delta}{\hat{H}_1}{n, d} - = E_n^{(1)} \sum_{d = 1}^{D} c_d \braket{n, \delta}{n, d} - = c_{\delta} E_n^{(1)} -\end{aligned}\]
-Let us now interpret the summation terms as matrix elements \(M_{\delta, d}\):
-\[\begin{aligned} - M_{\delta, d} = \matrixel{n, \delta}{\hat{H}_1}{n, d} -\end{aligned}\]
-By varying the value of \(\delta\) from \(1\) to \(D\), we end up with equations of the form:
-\[\begin{aligned} - \begin{bmatrix} - M_{1, 1} & \cdots & M_{1, D} \\ - \vdots & \ddots & \vdots \\ - M_{D, 1} & \cdots & M_{D, D} - \end{bmatrix} - \begin{bmatrix} - c_1 \\ \vdots \\ c_D - \end{bmatrix} - = E_n^{(1)} - \begin{bmatrix} - c_1 \\ \vdots \\ c_D - \end{bmatrix} -\end{aligned}\]
-This is an eigenvalue problem for \(E_n^{(1)}\), where \(c_d\) are the components of the eigenvectors which represent the “good” states. After solving this, let \(\ket{n, g}\) be the resulting “good” states. Then, as long as \(E_n^{(1)}\) is a non-degenerate eigenvalue of \(M\):
-\[\begin{aligned} - \boxed{ - E_{n, g}^{(1)} = \matrixel{n, g}{\hat{H}_1}{n, g} - } -\end{aligned}\]
-Which is the same as in the non-degenerate case! Even better, the first-order wave function correction is also unchanged:
-\[\begin{aligned} - \boxed{ - \ket*{\psi_{n,g}^{(1)}} - = \sum_{m \neq (n, g)} \frac{\matrixel{m}{\hat{H}_1}{n, g}}{\varepsilon_n - \varepsilon_m} \ket{m} - } -\end{aligned}\]
-This works because the matrix \(M\) is diagonal in the \(\ket{n, g}\)-basis, such that when \(\ket{m}\) is any vector \(\ket{n, \gamma}\) in the \(\ket{n}\)-eigenspace (except for \(\ket{n,g}\), which is explicitly excluded), then the corresponding numerator \(\matrixel{n, \gamma}{\hat{H}_1}{n, g} = M_{\gamma, g} = 0\), so the term does not contribute.
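A minimal numerical sketch of the degenerate procedure, with arbitrary example matrices: build \(M\) inside a two-fold degenerate eigenspace, diagonalize it, and compare the first-order energies to exact diagonalization:

```python
import numpy as np

# Example: eps = 1 is two-fold degenerate (basis states d = 1, 2)
H0 = np.diag([1.0, 1.0, 3.0])
H1 = np.array([[0.0, 0.5, 0.1],
               [0.5, 0.0, 0.2],
               [0.1, 0.2, 0.0]])
lam = 1e-3

# M_{delta, d} = <n,delta| H1 |n,d>, restricted to the degenerate eigenspace
M = H1[:2, :2]
E1, C = np.linalg.eigh(M)   # eigenvalues: first-order corrections;
                            # columns of C: components of the "good" states

# The two lowest exact eigenvalues should match eps_n + lam * E1
exact = np.linalg.eigvalsh(H0 + lam * H1)[:2]
assert np.allclose(exact, 1.0 + lam * E1, atol=1e-7)  # degeneracy is lifted
```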
-If any of the eigenvalues \(E_n^{(1)}\) of \(M\) are degenerate, then there is still information missing about the components \(c_d\) of the “good” states, in which case we must find them some other way.
-Such an alternative way of determining these “good” states is also of interest even if there is no degeneracy in \(M\), since it would allow us to use the formulae from non-degenerate perturbation theory straight away.
-The trick is to find a Hermitian operator \(\hat{L}\) (usually using symmetries of the system) which commutes with both \(\hat{H}_0\) and \(\hat{H}_1\):
-\[\begin{aligned} - [\hat{L}, \hat{H}_0] = [\hat{L}, \hat{H}_1] = 0 -\end{aligned}\]
-It follows that \(\hat{L}\) shares eigenstates with \(\hat{H}_0\) (and \(\hat{H}_1\)): within the \(D\)-dimensional \(\ket{n}\)-eigenspace, we can choose a basis of vectors that are also eigenvectors of \(\hat{L}\).
-The crucial part, however, is that \(\hat{L}\) must be chosen such that \(\ket{n, d_1}\) and \(\ket{n, d_2}\) have distinct eigenvalues \(\ell_1 \neq \ell_2\) for \(d_1 \neq d_2\):
-\[\begin{aligned} - \hat{L} \ket{n, d_1} = \ell_1 \ket{n, d_1} - \qquad - \hat{L} \ket{n, d_2} = \ell_2 \ket{n, d_2} -\end{aligned}\]
-When this holds for any orthogonal choice of \(\ket{n, d_1}\) and \(\ket{n, d_2}\), these eigenvectors of \(\hat{L}\) are the “good” states, for any valid choice of \(\hat{L}\).
-