From 6ce0bb9a8f9fd7d169cbb414a9537d68c5290aae Mon Sep 17 00:00:00 2001
From: Prefetch
Date: Fri, 14 Oct 2022 23:25:28 +0200
Subject: Initial commit after migration from Hugo

---
 .../time-dependent-perturbation-theory/index.md | 202 +++++++++++++++++++++
 1 file changed, 202 insertions(+)
 create mode 100644 source/know/concept/time-dependent-perturbation-theory/index.md

diff --git a/source/know/concept/time-dependent-perturbation-theory/index.md b/source/know/concept/time-dependent-perturbation-theory/index.md
new file mode 100644
index 0000000..2b80316
--- /dev/null
+++ b/source/know/concept/time-dependent-perturbation-theory/index.md
@@ -0,0 +1,202 @@
---
title: "Time-dependent perturbation theory"
date: 2021-03-07
categories:
- Physics
- Quantum mechanics
- Perturbation
layout: "concept"
---

In quantum mechanics, **time-dependent perturbation theory** is a method
to deal with time-varying perturbations to the Schrödinger equation.
This is in contrast to [time-independent perturbation theory](/know/concept/time-independent-perturbation-theory/),
where the perturbation is stationary.
Let $\hat{H}_0$ be the base time-independent
Hamiltonian, and $\hat{H}_1$ be a time-varying perturbation, with
"bookkeeping" parameter $\lambda$:

$$\begin{aligned}
    \hat{H}(t) = \hat{H}_0 + \lambda \hat{H}_1(t)
\end{aligned}$$

We assume that the unperturbed time-independent problem
$\hat{H}_0 \Ket{n} = E_n \Ket{n}$ has already been solved, such that the
full solution is:

$$\begin{aligned}
    \Ket{\Psi_0(t)} = \sum_{n} c_n \Ket{n} \exp(- i E_n t / \hbar)
\end{aligned}$$

Since these $\Ket{n}$ form a complete basis, the perturbed wave function
can be written in the same form, but with time-dependent coefficients $c_n(t)$:

$$\begin{aligned}
    \Ket{\Psi(t)} = \sum_{n} c_n(t) \Ket{n} \exp(- i E_n t / \hbar)
\end{aligned}$$

We insert this ansatz into the time-dependent Schrödinger equation, and
reduce it using the known unperturbed time-independent problem:

$$\begin{aligned}
    0
    &= \hat{H}_0 \Ket{\Psi(t)} + \lambda \hat{H}_1 \Ket{\Psi(t)} - i \hbar \dv{}{t}\Ket{\Psi(t)}
    \\
    &= \sum_{n}
    \Big( c_n \hat{H}_0 \Ket{n} + \lambda c_n \hat{H}_1 \Ket{n} - c_n E_n \Ket{n} - i \hbar \dv{c_n}{t} \Ket{n} \Big) \exp(- i E_n t / \hbar)
    \\
    &= \sum_{n} \Big( \lambda c_n \hat{H}_1 \Ket{n} - i \hbar \dv{c_n}{t} \Ket{n} \Big) \exp(- i E_n t / \hbar)
\end{aligned}$$

We then take the inner product with an arbitrary stationary basis state $\Ket{m}$:

$$\begin{aligned}
    0
    &= \sum_{n} \Big( \lambda c_n \matrixel{m}{\hat{H}_1}{n} - i \hbar \dv{c_n}{t} \Inprod{m}{n} \Big) \exp(- i E_n t / \hbar)
\end{aligned}$$

Thanks to orthonormality, $\Inprod{m}{n} = \delta_{mn}$,
so only the $n = m$ term survives in the latter part of the summation:

$$\begin{aligned}
    i \hbar \dv{c_m}{t} \exp(- i E_m t / \hbar)
    &= \lambda \sum_{n} c_n \matrixel{m}{\hat{H}_1}{n} \exp(- i E_n t / \hbar)
\end{aligned}$$

We divide by the left-hand exponential and define
$\omega_{mn} \equiv (E_m - E_n) / \hbar$ to get:

$$\begin{aligned}
    \boxed{
        i \hbar \dv{c_m}{t}
        = \lambda \sum_{n} c_n(t) 
\matrixel{m}{\hat{H}_1(t)}{n} \exp(i \omega_{mn} t) + } +\end{aligned}$$ + +So far, we have not invoked any approximation, +so we can analytically find $c_n(t)$ for some simple systems. +Furthermore, it is useful to write this equation in integral form instead: + +$$\begin{aligned} + c_m(t) + = c_m(0) - \lambda \frac{i}{\hbar} \sum_{n} \int_0^t c_n(\tau) \matrixel{m}{\hat{H}_1(\tau)}{n} \exp(i \omega_{mn} \tau) \dd{\tau} +\end{aligned}$$ + +If this cannot be solved exactly, we must approximate it. We expand +$c_m(t)$ in the usual way, with the initial condition $c_m^{(j)}(0) = 0$ +for $j > 0$: + +$$\begin{aligned} + c_m(t) = c_m^{(0)} + \lambda c_m^{(1)}(t) + \lambda^2 c_m^{(2)}(t) + ... +\end{aligned}$$ + +We then insert this into the integral and collect the non-zero orders of $\lambda$: + +$$\begin{aligned} + c_m^{(1)}(t) + &= - \frac{i}{\hbar} \sum_{n} \int_0^t c_n^{(0)} \matrixel{m}{\hat{H}_1(\tau)}{n} \exp(i \omega_{mn} \tau) \dd{\tau} + \\ + c_m^{(2)}(t) + &= - \frac{i}{\hbar} \sum_{n} + \int_0^t c_n^{(1)}(\tau) \matrixel{m}{\hat{H}_1(\tau)}{n} \exp(i \omega_{mn} \tau) \dd{\tau} + \\ + c_m^{(3)}(t) + &= - \frac{i}{\hbar} \sum_{n} + \int_0^t c_n^{(2)}(\tau) \matrixel{m}{\hat{H}_1(\tau)}{n} \exp(i \omega_{mn} \tau) \dd{\tau} +\end{aligned}$$ + +And so forth. The pattern here is clear: we can calculate the $(j\!+\!1)$th +correction using only our previous result for the $j$th correction. +We cannot go any further than this without considering a specific perturbation $\hat{H}_1(t)$. + + +## Sinusoidal perturbation + +Arguably the most important perturbation +is a sinusoidally-varying potential, which represents +e.g. incoming electromagnetic waves, +or an AC voltage being applied to the system. 
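Before deriving analytical results for this case, the exact coupled equations above can be checked against the first-order recursion by direct numerical integration. The sketch below does this for a hypothetical two-level system under a sinusoidal drive; the parameters `w0`, `w`, `v` are illustrative assumptions (not from the text), in units where $\hbar = 1$:

```python
import numpy as np

# Sanity check of first-order time-dependent perturbation theory for a
# hypothetical two-level system (illustrative parameters, hbar = 1).
w0 = 1.0   # transition frequency omega_21 = (E_2 - E_1) / hbar
w = 0.8    # driving frequency, off resonance by 0.2
v = 0.02   # matrix element <2|V|1>, weak so perturbation theory applies

def rhs(t, c):
    """Exact coupled equations: i dc_m/dt = sum_n c_n <m|H1(t)|n> exp(i w_mn t)."""
    h = v * np.sin(w * t)  # H1 is purely off-diagonal here
    return np.array([
        -1j * c[1] * h * np.exp(-1j * w0 * t),  # dc_1/dt, using w_12 = -w0
        -1j * c[0] * h * np.exp(+1j * w0 * t),  # dc_2/dt, using w_21 = +w0
    ])

# Integrate the exact equations with classical RK4, starting in state |1>
dt, steps = 1e-3, 20_000
c = np.array([1.0 + 0j, 0.0 + 0j])
for i in range(steps):
    t = i * dt
    k1 = rhs(t, c)
    k2 = rhs(t + dt / 2, c + dt / 2 * k1)
    k3 = rhs(t + dt / 2, c + dt / 2 * k2)
    k4 = rhs(t + dt, c + dt * k3)
    c = c + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# First-order correction: c_2(t) ~ -(i/hbar) int_0^t v sin(w tau) exp(i w0 tau) dtau,
# evaluated by trapezoidal quadrature
tau = np.linspace(0.0, steps * dt, steps + 1)
f = v * np.sin(w * tau) * np.exp(1j * w0 * tau)
c2_first = -1j * np.sum((f[1:] + f[:-1]) / 2) * (tau[1] - tau[0])

P_exact = abs(c[1]) ** 2
P_first = abs(c2_first) ** 2
print(P_exact, P_first)  # the two transition probabilities should be close
```

For weak driving the two printed probabilities agree closely; increasing `v` makes the higher-order corrections (and eventually the full Rabi flopping) visible as a growing discrepancy.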
Such a sinusoidal perturbation $\hat{H}_1$ has the following form:

$$\begin{aligned}
    \hat{H}_1(\vec{r}, t)
    \equiv V(\vec{r}) \sin(\omega t)
    = \frac{1}{2 i} V(\vec{r}) \: \big( \exp(i \omega t) - \exp(-i \omega t) \big)
\end{aligned}$$

We abbreviate $V_{mn} = \matrixel{m}{V}{n}$,
and take the first-order correction formula:

$$\begin{aligned}
    c_m^{(1)}(t)
    &= - \frac{1}{2 \hbar} \sum_{n} V_{mn} c_n^{(0)}
    \int_0^t \exp\!\big(i \tau (\omega_{mn} \!+\! \omega)\big) - \exp\!\big(i \tau (\omega_{mn} \!-\! \omega)\big) \dd{\tau}
    \\
    &= \frac{i}{2 \hbar} \sum_{n} V_{mn} c_n^{(0)}
    \bigg( \frac{\exp\!\big(i t (\omega_{mn} \!+\! \omega) \big) - 1}{\omega_{mn} + \omega}
    - \frac{\exp\!\big(i t (\omega_{mn} \!-\! \omega) \big) - 1}{\omega_{mn} - \omega} \bigg)
\end{aligned}$$

For simplicity, we let the system start in a known state $\Ket{a}$,
such that $c_n^{(0)} = \delta_{na}$,
and we assume that the driving frequency is close to resonance $\omega \approx \omega_{ma}$,
such that the second term dominates the first, which can then be neglected.
We thus get:

$$\begin{aligned}
    c_m^{(1)}(t)
    &= - i \frac{V_{ma}}{2 \hbar} \frac{\exp\!\big(i t (\omega_{ma} \!-\! \omega) \big) - 1}{\omega_{ma} - \omega}
    \\
    &= - i \frac{V_{ma}}{2 \hbar}
    \frac{\exp\!\big(i t (\omega_{ma} \!-\! \omega) / 2 \big) - \exp\!\big(\!-\! i t (\omega_{ma} \!-\! \omega) / 2 \big)}{\omega_{ma} - \omega}
    \: \exp\!\big(i t (\omega_{ma} \!-\! \omega) / 2 \big)
    \\
    &= \frac{V_{ma}}{\hbar}
    \frac{\sin\!\big( t (\omega_{ma} \!-\! \omega) / 2 \big)}{\omega_{ma} - \omega}
    \: \exp\!\big(i t (\omega_{ma} \!-\! \omega) / 2 \big)
\end{aligned}$$

Taking the norm squared yields the **transition probability**:
the probability that a particle that started in state $\Ket{a}$
will be found in $\Ket{m}$ at time $t$:

$$\begin{aligned}
    \boxed{
        P_{a \to m}
        = |c_m^{(1)}(t)|^2
        = \frac{|V_{ma}|^2}{\hbar^2} \frac{\sin^2\!\big( (\omega_{ma} - \omega) t / 2 \big)}{(\omega_{ma} - \omega)^2}
    }
\end{aligned}$$

The result would be the same if $\hat{H}_1 \equiv V \cos(\omega t)$.
However, if instead $\hat{H}_1 \equiv V \exp(- i \omega t)$,
the result is larger by a factor of $4$,
which can cause confusion when comparing literature.

In any case, the probability oscillates as a function of $t$
with period $T = 2 \pi / (\omega_{ma} \!-\! \omega)$,
so after one period the particle is back in $\Ket{a}$,
and at $T/2$ the probability of finding it in $\Ket{m}$ is maximized.
See [Rabi oscillation](/know/concept/rabi-oscillation/)
for a more accurate treatment of this "flopping" behaviour.

However, when regarded as a function of $\omega$,
the probability takes the form of
a squared sinc-function centred around the resonance $\omega = \omega_{ma}$,
so it is highest for transitions with energy $\hbar \omega = E_m \!-\! E_a$.

Also note that the sinc-distribution becomes narrower over time,
which roughly means that it takes some time
for the system to "notice" that
it is being driven periodically.
In other words, there is some "inertia" to it.



## References
1.  D.J. Griffiths, D.F. Schroeter,
    *Introduction to quantum mechanics*, 3rd edition,
    Cambridge.
2.  R. Shankar,
    *Principles of quantum mechanics*, 2nd edition,
    Springer.