---
title: "Time-dependent perturbation theory"
firstLetter: "T"
publishDate: 2021-03-07
categories:
- Physics
- Quantum mechanics
- Perturbation
date: 2021-03-07T11:08:14+01:00
draft: false
markup: pandoc
---
# Time-dependent perturbation theory
In quantum mechanics, **time-dependent perturbation theory** deals
with time-varying perturbations to the Schrödinger equation.
This is in contrast to [time-independent perturbation theory](/know/concept/time-independent-perturbation-theory/),
where the perturbation is stationary.
Let $\hat{H}_0$ be the base time-independent
Hamiltonian, and $\hat{H}_1$ be a time-varying perturbation, with
"bookkeeping" parameter $\lambda$:
$$\begin{aligned}
\hat{H}(t) = \hat{H}_0 + \lambda \hat{H}_1(t)
\end{aligned}$$
We assume that the unperturbed time-independent problem
$\hat{H}_0 \ket{n} = E_n \ket{n}$ has already been solved, such that the
full solution is:
$$\begin{aligned}
\ket{\Psi_0(t)} = \sum_{n} c_n \ket{n} \exp\!(- i E_n t / \hbar)
\end{aligned}$$
Since these $\ket{n}$ form a complete basis, the perturbed wave function
can be written in the same form, but with time-dependent coefficients $c_n(t)$:
$$\begin{aligned}
\ket{\Psi(t)} = \sum_{n} c_n(t) \ket{n} \exp\!(- i E_n t / \hbar)
\end{aligned}$$
We insert this ansatz in the time-dependent Schrödinger equation, and
reduce it using the known unperturbed time-independent problem:
$$\begin{aligned}
0
&= \hat{H}_0 \ket{\Psi(t)} + \lambda \hat{H}_1 \ket{\Psi(t)} - i \hbar \dv{t} \ket{\Psi(t)}
\\
&= \sum_{n}
\Big( c_n \hat{H}_0 \ket{n} + \lambda c_n \hat{H}_1 \ket{n} - c_n E_n \ket{n} - i \hbar \dv{c_n}{t} \ket{n} \Big) \exp\!(- i E_n t / \hbar)
\\
&= \sum_{n} \Big( \lambda c_n \hat{H}_1 \ket{n} - i \hbar \dv{c_n}{t} \ket{n} \Big) \exp\!(- i E_n t / \hbar)
\end{aligned}$$
We then take the inner product with an arbitrary stationary basis state $\ket{m}$:
$$\begin{aligned}
0
&= \sum_{n} \Big( \lambda c_n \matrixel{m}{\hat{H}_1}{n} - i \hbar \frac{d c_n}{dt} \braket{m}{n} \Big) \exp\!(- i E_n t / \hbar)
\end{aligned}$$
Thanks to the orthonormality relation $\braket{m}{n} = \delta_{mn}$, the sum in the second term collapses to the single term $n = m$:
$$\begin{aligned}
i \hbar \frac{d c_m}{dt} \exp\!(- i E_m t / \hbar)
&= \lambda \sum_{n} c_n \matrixel{m}{\hat{H}_1}{n} \exp\!(- i E_n t / \hbar)
\end{aligned}$$
We divide by the left-hand exponential and define
$\omega_{mn} \equiv (E_m - E_n) / \hbar$ to get:
$$\begin{aligned}
\boxed{
i \hbar \frac{d c_m}{dt}
= \lambda \sum_{n} c_n(t) \matrixel{m}{\hat{H}_1(t)}{n} \exp\!(i \omega_{mn} t)
}
\end{aligned}$$
So far, we have not invoked any approximation,
so we can analytically find $c_n(t)$ for some simple systems.
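For systems that resist an analytical treatment, the exact equation can also simply be integrated numerically.
The sketch below (an illustration, not part of the derivation) does this for a hypothetical two-level system
driven by a sinusoidal perturbation; all parameter values are assumptions, with $\hbar = 1$ and $\lambda$ absorbed into $\hat{H}_1$.

```python
# Minimal sketch: integrate the exact coupled equations
#   i*hbar * dc_m/dt = sum_n c_n <m|H_1(t)|n> exp(i*w_mn*t)
# for a hypothetical two-level system (states |0> = |a> and |1>).
# All numbers are illustrative assumptions; hbar = 1, lambda = 1.
import numpy as np
from scipy.integrate import solve_ivp

hbar = 1.0
E = np.array([0.0, 1.0])                 # assumed unperturbed energies E_n
V = np.array([[0.0, 0.1],
              [0.1, 0.0]])               # assumed matrix elements <m|V|n>
omega = 1.0                              # assumed drive frequency (near resonance)

def rhs(t, c):
    w = (E[:, None] - E[None, :]) / hbar          # w_mn = (E_m - E_n)/hbar
    M = V * np.sin(omega * t) * np.exp(1j * w * t)  # <m|H_1|n> e^{i w_mn t}
    return (-1j / hbar) * (M @ c)

c0 = np.array([1.0 + 0j, 0.0 + 0j])      # start in the known state |a> = |0>
sol = solve_ivp(rhs, (0.0, 50.0), c0, max_step=0.05)
print(np.abs(sol.y[:, -1])**2)           # occupation probabilities at the final time
```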
Furthermore, it is useful to write this equation in integral form instead:
$$\begin{aligned}
c_m(t)
= c_m(0) - \lambda \frac{i}{\hbar} \sum_{n} \int_0^t c_n(\tau) \matrixel{m}{\hat{H}_1(\tau)}{n} \exp\!(i \omega_{mn} \tau) \dd{\tau}
\end{aligned}$$
If this cannot be solved exactly, we must approximate it. We expand
$c_m(t)$ as a power series in the bookkeeping parameter $\lambda$,
with the initial condition $c_m^{(j)}(0) = 0$ for $j > 0$:
$$\begin{aligned}
c_m(t) = c_m^{(0)} + \lambda c_m^{(1)}(t) + \lambda^2 c_m^{(2)}(t) + ...
\end{aligned}$$
We then insert this into the integral and collect the non-zero orders of $\lambda$:
$$\begin{aligned}
c_m^{(1)}(t)
&= - \frac{i}{\hbar} \sum_{n} \int_0^t c_n^{(0)} \matrixel{m}{\hat{H}_1(\tau)}{n} \exp\!(i \omega_{mn} \tau) \dd{\tau}
\\
c_m^{(2)}(t)
&= - \frac{i}{\hbar} \sum_{n}
\int_0^t c_n^{(1)}(\tau) \matrixel{m}{\hat{H}_1(\tau)}{n} \exp\!(i \omega_{mn} \tau) \dd{\tau}
\\
c_m^{(3)}(t)
&= - \frac{i}{\hbar} \sum_{n}
\int_0^t c_n^{(2)}(\tau) \matrixel{m}{\hat{H}_1(\tau)}{n} \exp\!(i \omega_{mn} \tau) \dd{\tau}
\end{aligned}$$
And so forth. The pattern here is clear: we can calculate the $(j\!+\!1)$th
correction using only our previous result for the $j$th correction.
We cannot go any further than this without considering a specific perturbation $\hat{H}_1(t)$.
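Before doing so, the mechanics of the recursion can be illustrated numerically.
The sketch below (my own illustration, not part of the derivation) plugs in a placeholder perturbation
$V \sin(\omega t)$ with made-up parameters ($\hbar = 1$, $\lambda = 1$) and obtains the first two corrections
by cumulative quadrature, each order feeding on the previous one.

```python
# Sketch of the iterative scheme: c^(j+1)_m(t) is one more integral of
# c^(j)_n(tau) against <m|H_1(tau)|n> exp(i w_mn tau).
# Placeholder perturbation and parameters; hbar = 1, lambda = 1.
import numpy as np
from scipy.integrate import cumulative_trapezoid

hbar = 1.0
E = np.array([0.0, 1.0])                         # assumed energies E_n
V = np.array([[0.0, 0.1],
              [0.1, 0.0]])                       # assumed <m|V|n>
omega = 0.9                                      # assumed drive frequency
w = (E[:, None] - E[None, :]) / hbar             # w_mn

tau = np.linspace(0.0, 30.0, 3001)               # integration grid

def next_order(c_prev):
    """Given c^(j)_n(tau) (shape: states x times), return c^(j+1)_m(t)."""
    phase = np.sin(omega * tau)[None, None, :] * np.exp(1j * w[:, :, None] * tau)
    integrand = np.einsum('mn,nt,mnt->mt', V, c_prev, phase)   # sum over n
    return (-1j / hbar) * cumulative_trapezoid(integrand, tau, axis=1, initial=0)

c0 = np.zeros((2, tau.size), dtype=complex)
c0[0, :] = 1.0                                   # c^(0)_n = delta_{na}, a = 0
c1 = next_order(c0)                              # first-order correction
c2 = next_order(c1)                              # second-order correction
print(np.abs(c1[1, -1])**2, np.abs(c2[1, -1])**2)
```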
## Sinusoidal perturbation
Arguably the most important perturbation
is a sinusoidally-varying potential, which represents
e.g. incoming electromagnetic waves,
or an AC voltage being applied to the system.
In this case, $\hat{H}_1$ has the following form:
$$\begin{aligned}
\hat{H}_1(\vec{r}, t)
\equiv V(\vec{r}) \sin\!(\omega t)
= \frac{1}{2 i} V(\vec{r}) \: \big( \exp\!(i \omega t) - \exp\!(-i \omega t) \big)
\end{aligned}$$
We abbreviate $V_{mn} = \matrixel{m}{V}{n}$,
and take the first-order correction formula:
$$\begin{aligned}
c_m^{(1)}(t)
&= - \frac{1}{2 \hbar} \sum_{n} V_{mn} c_n^{(0)}
\int_0^t \exp\!\big(i \tau (\omega_{mn} \!+\! \omega)\big) - \exp\big(i \tau (\omega_{mn} \!-\! \omega)\big) \dd{\tau}
\\
&= \frac{i}{2 \hbar} \sum_{n} V_{mn} c_n^{(0)}
\bigg( \frac{\exp\!\big(i t (\omega_{mn} \!+\! \omega) \big) - 1}{\omega_{mn} + \omega}
- \frac{\exp\!\big(i t (\omega_{mn} \!-\! \omega) \big) - 1}{\omega_{mn} - \omega} \bigg)
\end{aligned}$$
For simplicity, we let the system start in a known state $\ket{a}$,
such that $c_n^{(0)} = \delta_{na}$,
and we assume that the driving frequency is near resonance, $\omega \approx \omega_{ma}$,
so the second term dominates and the first can be neglected.
We thus get:
$$\begin{aligned}
c_m^{(1)}(t)
&= - i \frac{V_{ma}}{2 \hbar} \frac{\exp\!\big(i t (\omega_{ma} \!-\! \omega) \big) - 1}{\omega_{ma} - \omega}
\\
&= - i \frac{V_{ma}}{2 \hbar}
\frac{\exp\!\big(i t (\omega_{ma} \!-\! \omega) / 2 \big) - \exp\!\big(\!-\! i t (\omega_{ma} \!-\! \omega) / 2 \big)}{\omega_{ma} - \omega}
\: \exp\!\big(i t (\omega_{ma} \!-\! \omega) / 2 \big)
\\
&= \frac{V_{ma}}{\hbar}
\frac{\sin\!\big( t (\omega_{ma} \!-\! \omega) / 2 \big)}{\omega_{ma} - \omega}
\: \exp\!\big(i t (\omega_{ma} \!-\! \omega) / 2 \big)
\end{aligned}$$
Taking the norm squared yields the **transition probability**:
the probability that a particle that started in state $\ket{a}$
will be found in $\ket{m}$ at time $t$:
$$\begin{aligned}
\boxed{
P_{a \to m}
= |c_m^{(1)}(t)|^2
= \frac{|V_{ma}|^2}{\hbar^2} \frac{\sin^2\!\big( (\omega_{ma} - \omega) t / 2 \big)}{(\omega_{ma} - \omega)^2}
}
\end{aligned}$$
The result would be the same if $\hat{H}_1 \equiv V \cos\!(\omega t)$.
However, if instead $\hat{H}_1 \equiv V \exp\!(- i \omega t)$,
the result is larger by a factor of $4$,
which can cause confusion when comparing literature.
In any case, the probability oscillates as a function of $t$
with period $T = 2 \pi / (\omega_{ma} \!-\! \omega)$:
after one full period the particle is certain to be back in $\ket{a}$,
and the probability of finding it in $\ket{m}$ peaks at $T/2$.
See [Rabi oscillation](/know/concept/rabi-oscillation/)
for a more accurate treatment of this "flopping" behaviour.
However, when regarded as a function of $\omega$,
the probability takes the form of
a squared sinc function centred at $\omega = \omega_{ma}$,
so it is highest for transitions with energy $\hbar \omega = E_m \!-\! E_a$.
Also note that the sinc-distribution becomes narrower over time,
which roughly means that it takes some time
for the system to "notice" that
it is being driven periodically.
In other words, there is some "inertia" to it.
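As a quick numerical sanity check (my own illustration, using made-up values of $V_{ma}$ and $\omega_{ma}$, with $\hbar = 1$),
the sketch below evaluates the boxed first-order probability as a function of $\omega$
and confirms both that the peak sits at $\omega = \omega_{ma}$ and that its main lobe narrows as $t$ grows.

```python
# Evaluate P = |V_ma|^2/hbar^2 * sin^2((w_ma - w) t / 2) / (w_ma - w)^2
# for assumed values, and estimate the width of its main lobe at two times.
import numpy as np

hbar = 1.0
V_ma = 0.1     # assumed matrix element <m|V|a>
w_ma = 1.0     # assumed transition frequency (E_m - E_a)/hbar

def P(t, w):
    """First-order transition probability versus time and drive frequency."""
    delta = w_ma - w
    # sin^2(delta*t/2)/delta^2 written with numpy's normalized sinc,
    # which handles the resonant limit delta -> 0 gracefully:
    return (abs(V_ma) / hbar)**2 * (t / 2)**2 * np.sinc(delta * t / (2 * np.pi))**2

w = np.linspace(0.5, 1.5, 2001)
for t in (10.0, 40.0):
    prob = P(t, w)
    lobe = w[prob > 0.5 * prob.max()]
    print(f"t = {t}: peak at w = {w[prob.argmax()]:.3f}, "
          f"FWHM ~ {lobe.max() - lobe.min():.3f}")   # width shrinks with t
```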
## References
1. D.J. Griffiths, D.F. Schroeter,
*Introduction to quantum mechanics*, 3rd edition,
Cambridge.
2. R. Shankar,
*Principles of quantum mechanics*, 2nd edition,
Springer.