author     Prefetch  2022-12-17 18:19:26 +0100
committer  Prefetch  2022-12-17 18:20:50 +0100
commit     a39bb3b8aab1aeb4fceaedc54c756703819776c3 (patch)
tree       b21ecb4677745fb8c275e54f2ad9d4c2e775a3d8 /source/know/concept
parent     49cc36648b489f7d1c75e1fde79f0990e08dd514 (diff)
Rewrite "Lagrange multiplier", various improvements
Diffstat (limited to 'source/know/concept')
-rw-r--r--  source/know/concept/convolution-theorem/index.md            |  18
-rw-r--r--  source/know/concept/debye-length/index.md                   |   2
-rw-r--r--  source/know/concept/electric-dipole-approximation/index.md  |   4
-rw-r--r--  source/know/concept/gronwall-bellman-inequality/index.md    |  17
-rw-r--r--  source/know/concept/lagrange-multiplier/index.md            | 187
-rw-r--r--  source/know/concept/material-derivative/index.md            |   4
-rw-r--r--  source/know/concept/parsevals-theorem/index.md              |  20
-rw-r--r--  source/know/concept/rabi-oscillation/index.md               |  41
-rw-r--r--  source/know/concept/self-steepening/index.md                |  15
-rw-r--r--  source/know/concept/shors-algorithm/index.md                |   9
-rw-r--r--  source/know/concept/thermodynamic-potential/index.md        |  10
-rw-r--r--  source/know/concept/toffoli-gate/index.md                   |   4
12 files changed, 193 insertions, 138 deletions
diff --git a/source/know/concept/convolution-theorem/index.md b/source/know/concept/convolution-theorem/index.md
index 510417a..d10d85d 100644
--- a/source/know/concept/convolution-theorem/index.md
+++ b/source/know/concept/convolution-theorem/index.md
@@ -36,9 +36,9 @@ rearrange the integrals:
$$\begin{aligned}
\hat{\mathcal{F}}{}^{-1}\{\tilde{f}(k) \: \tilde{g}(k)\}
- &= B \int_{-\infty}^\infty \tilde{f}(k) \Big( A \int_{-\infty}^\infty g(x') \exp(i s k x') \dd{x'} \Big) \exp(-i s k x) \dd{k}
+ &= B \int_{-\infty}^\infty \tilde{f}(k) \Big( A \int_{-\infty}^\infty g(x') \: e^{i s k x'} \dd{x'} \Big) e^{-i s k x} \dd{k}
\\
- &= A \int_{-\infty}^\infty g(x') \Big( B \int_{-\infty}^\infty \tilde{f}(k) \exp(- i s k (x - x')) \dd{k} \Big) \dd{x'}
+ &= A \int_{-\infty}^\infty g(x') \Big( B \int_{-\infty}^\infty \tilde{f}(k) \: e^{-i s k (x - x')} \dd{k} \Big) \dd{x'}
\\
&= A \int_{-\infty}^\infty g(x') \: f(x - x') \dd{x'}
= A \cdot (f * g)(x)
@@ -49,9 +49,9 @@ this time starting from a product in the $$x$$-domain:
$$\begin{aligned}
\hat{\mathcal{F}}\{f(x) \: g(x)\}
- &= A \int_{-\infty}^\infty f(x) \Big( B \int_{-\infty}^\infty \tilde{g}(k') \exp(- i s x k') \dd{k'} \Big) \exp(i s k x) \dd{x}
+ &= A \int_{-\infty}^\infty f(x) \Big( B \int_{-\infty}^\infty \tilde{g}(k') \: e^{-i s x k'} \dd{k'} \Big) e^{i s k x} \dd{x}
\\
- &= B \int_{-\infty}^\infty \tilde{g}(k') \Big( A \int_{-\infty}^\infty f(x) \exp(i s x (k - k')) \dd{x} \Big) \dd{k'}
+ &= B \int_{-\infty}^\infty \tilde{g}(k') \Big( A \int_{-\infty}^\infty f(x) \: e^{i s x (k - k')} \dd{x} \Big) \dd{k'}
\\
&= B \int_{-\infty}^\infty \tilde{g}(k') \: \tilde{f}(k - k') \dd{k'}
= B \cdot (\tilde{f} * \tilde{g})(k)
@@ -86,20 +86,20 @@ because we set both $$f(t)$$ and $$g(t)$$ to zero for $$t < 0$$:
$$\begin{aligned}
\hat{\mathcal{L}}\{(f * g)(t)\}
- &= \int_0^\infty \Big( \int_0^\infty g(t') f(t - t') \dd{t'} \Big) \exp(- s t) \dd{t}
+ &= \int_0^\infty \Big( \int_0^\infty g(t') \: f(t - t') \dd{t'} \Big) e^{-s t} \dd{t}
\\
- &= \int_0^\infty \Big( \int_0^\infty f(t - t') \exp(- s t) \dd{t} \Big) g(t') \dd{t'}
+ &= \int_0^\infty \Big( \int_0^\infty f(t - t') \: e^{-s t} \dd{t} \Big) g(t') \dd{t'}
\end{aligned}$$
Then we define a new integration variable $$\tau = t - t'$$, yielding:
$$\begin{aligned}
\hat{\mathcal{L}}\{(f * g)(t)\}
- &= \int_0^\infty \Big( \int_0^\infty f(\tau) \exp(- s (\tau + t')) \dd{\tau} \Big) g(t') \dd{t'}
+ &= \int_0^\infty \Big( \int_0^\infty f(\tau) \: e^{-s (\tau + t')} \dd{\tau} \Big) g(t') \dd{t'}
\\
- &= \int_0^\infty \Big( \int_0^\infty f(\tau) \exp(- s \tau) \dd{\tau} \Big) g(t') \exp(- s t') \dd{t'}
+ &= \int_0^\infty \Big( \int_0^\infty f(\tau) \: e^{-s \tau} \dd{\tau} \Big) g(t') \: e^{-s t'} \dd{t'}
\\
- &= \int_0^\infty \tilde{f}(s) \: g(t') \exp(- s t') \dd{t'}
+ &= \int_0^\infty \tilde{f}(s) \: g(t') \: e^{-s t'} \dd{t'}
= \tilde{f}(s) \: \tilde{g}(s)
\end{aligned}$$
{% include proof/end.html id="proof-laplace" %}
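A minimal numerical sketch of the Fourier version of the theorem, assuming NumPy's DFT conventions (for which the analogous identity holds exactly for *circular* convolution):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
f = rng.standard_normal(N)
g = rng.standard_normal(N)

# Direct circular convolution: (f * g)[n] = sum_m f[m] g[(n - m) mod N]
direct = np.array([sum(f[m] * g[(n - m) % N] for m in range(N)) for n in range(N)])

# Convolution theorem: pointwise product of the spectra, inverted back
via_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real

print(np.allclose(direct, via_fft))  # True
```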
diff --git a/source/know/concept/debye-length/index.md b/source/know/concept/debye-length/index.md
index e226ad9..5961c4f 100644
--- a/source/know/concept/debye-length/index.md
+++ b/source/know/concept/debye-length/index.md
@@ -123,7 +123,7 @@ This treatment only makes sense
if the plasma is sufficiently dense,
such that there is a large number of particles
in a sphere with radius $$\lambda_D$$.
-This corresponds to a large [Coulomb logarithm](/know/concept/coulomb-logarithm/) $$\ln\!(\Lambda)$$:
+This corresponds to a large [Coulomb logarithm](/know/concept/coulomb-logarithm/) $$\ln(\Lambda)$$:
$$\begin{aligned}
1 \ll \frac{4 \pi}{3} n_0 \lambda_D^3 = \frac{2}{9} \Lambda
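To get a feeling for the numbers, here is a small sketch with illustrative fusion-plasma values (1 keV electrons at density $$10^{20} \:\mathrm{m}^{-3}$$), using the standard single-species electron Debye length $$\lambda_D = \sqrt{\varepsilon_0 k_B T_e / (n_e q_e^2)}$$:

```python
import numpy as np
from scipy.constants import epsilon_0, e, k as k_B

n_e = 1e20           # electron density [1/m^3] (illustrative)
T_e = 1e3 * e / k_B  # 1 keV expressed in kelvin

lambda_D = np.sqrt(epsilon_0 * k_B * T_e / (n_e * e**2))  # ~2.4e-5 m
Lambda = (9 / 2) * (4 * np.pi / 3) * n_e * lambda_D**3
print(np.log(Lambda))  # ~17, i.e. ln(Lambda) is indeed large
```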
diff --git a/source/know/concept/electric-dipole-approximation/index.md b/source/know/concept/electric-dipole-approximation/index.md
index 7c710ec..35cf00c 100644
--- a/source/know/concept/electric-dipole-approximation/index.md
+++ b/source/know/concept/electric-dipole-approximation/index.md
@@ -30,6 +30,8 @@ so that $$\vb{A} \cdot \vu{P} = \vu{P} \cdot \vb{A}$$:
$$\begin{aligned}
\comm{\vb{A}}{\vu{P}} \psi
+ &= (\vb{A} \cdot \vu{P} - \vu{P} \cdot \vb{A}) \psi
+ \\
&= -i \hbar \vb{A} \cdot (\nabla \psi) + i \hbar \nabla \cdot (\vb{A} \psi)
\\
&= i \hbar (\nabla \cdot \vb{A}) \psi
@@ -75,7 +77,7 @@ $$\begin{aligned}
Where $$\vb{E}_0 = \omega \vb{A}_0$$.
Let us restrict ourselves to visible light,
whose wavelength $$2 \pi / |\vb{k}| \sim 10^{-6} \:\mathrm{m}$$.
-Meanwhile, an atomic orbital is several Bohr $$\sim 10^{-10} \:\mathrm{m}$$,
+Meanwhile, an atomic orbital is several Bohr radii $$\sim 10^{-10} \:\mathrm{m}$$,
so $$\vb{k} \cdot \vb{x}$$ is negligible:
$$\begin{aligned}
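The smallness of $$\vb{k} \cdot \vb{x}$$ is easy to quantify with the numbers above; a quick sketch, with the orbital size chosen as a few Bohr radii purely for illustration:

```python
import numpy as np

wavelength = 1e-6  # visible light [m]
orbital = 5e-10    # a few Bohr radii [m] (illustrative)
print(2 * np.pi / wavelength * orbital)  # ~3e-3, so exp(i k.x) ~ 1
```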
diff --git a/source/know/concept/gronwall-bellman-inequality/index.md b/source/know/concept/gronwall-bellman-inequality/index.md
index da1bcad..0d6db71 100644
--- a/source/know/concept/gronwall-bellman-inequality/index.md
+++ b/source/know/concept/gronwall-bellman-inequality/index.md
@@ -7,8 +7,8 @@ categories:
layout: "concept"
---
-Suppose we have a first-order ordinary differential equation
-for some function $$u(t)$$, and that it can be shown from this equation
+Suppose we have a first-order ordinary differential equation for some function $$u(t)$$,
+and assume that we can prove from this equation
that the derivative $$u'(t)$$ is bounded as follows:
$$\begin{aligned}
@@ -28,7 +28,7 @@ $$\begin{aligned}
{% include proof/start.html id="proof-original" -%}
-We define $$w(t)$$ to equal the upper bounds above
+We define $$w(t)$$ as equal to the upper bounds above
on both $$w'(t)$$ and $$w(t)$$ itself:
$$\begin{aligned}
@@ -40,7 +40,7 @@ $$\begin{aligned}
\end{aligned}$$
Where $$w(0) = u(0)$$.
-The goal is to show the following for all $$t$$:
+Then the goal is to show the following for all $$t$$:
$$\begin{aligned}
\frac{u(t)}{w(t)} \le 1
@@ -102,7 +102,7 @@ $$\begin{aligned}
\exp\!\bigg( \!-\!\! \int_0^t \beta(s) \dd{s} \bigg)
\end{aligned}$$
-The parenthesized expression it bounded from above by $$\alpha(t)$$,
+The parenthesized expression is bounded from above by $$\alpha(t)$$,
thanks to the condition that $$u(t)$$ is assumed to satisfy,
for the Grönwall-Bellman inequality to be true:
@@ -131,7 +131,8 @@ $$\begin{aligned}
	&\le \int_0^t \alpha(s) \: \beta(s) \exp\!\bigg( \int_s^t \beta(r) \dd{r} \bigg) \dd{s}
\end{aligned}$$
-Insert this into the condition under which the Grönwall-Bellman inequality holds.
+This yields the desired result after inserting it
+into the condition under which the Grönwall-Bellman inequality holds.
{% include proof/end.html id="proof-integral" %}
@@ -158,14 +159,14 @@ $$\begin{aligned}
&\le \alpha(t) + \alpha(t) \int_0^t \beta(s) \exp\!\bigg( \int_s^t \beta(r) \dd{r} \bigg) \dd{s}
\end{aligned}$$
-Now, consider the following straightfoward identity, involving the exponential:
+Now, consider the following straightforward identity, involving the exponential:
$$\begin{aligned}
\dv{}{s}\exp\!\bigg( \int_s^t \beta(r) \dd{r} \bigg)
&= - \beta(s) \exp\!\bigg( \int_s^t \beta(r) \dd{r} \bigg)
\end{aligned}$$
-By inserting this into Grönwall-Bellman inequality, we arrive at:
+By inserting this into the normal Grönwall-Bellman inequality, we arrive at:
$$\begin{aligned}
u(t)
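The original form of the inequality is easy to test numerically. A minimal sketch assuming SciPy, with $$\beta(t) = 1 + \sin(t)$$ and $$u' = \beta(t) \: u / 2$$, which satisfies the hypothesis for $$u \ge 0$$:

```python
import numpy as np
from scipy.integrate import solve_ivp

beta = lambda t: 1.0 + np.sin(t)
sol = solve_ivp(lambda t, u: 0.5 * beta(t) * u, (0.0, 5.0), [1.0],
                dense_output=True, rtol=1e-10)

t = np.linspace(0.0, 5.0, 200)
u = sol.sol(t)[0]
bound = np.exp(t - np.cos(t) + 1.0)  # u(0) exp(int_0^t beta(s) ds)
print(np.all(u <= bound))            # True
```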
diff --git a/source/know/concept/lagrange-multiplier/index.md b/source/know/concept/lagrange-multiplier/index.md
index 8ee1054..a0b22aa 100644
--- a/source/know/concept/lagrange-multiplier/index.md
+++ b/source/know/concept/lagrange-multiplier/index.md
@@ -1,7 +1,7 @@
---
title: "Lagrange multiplier"
sort_title: "Lagrange multiplier"
-date: 2021-03-02
+date: 2022-12-17 # Originally 2021-03-02, major rewrite
categories:
- Mathematics
- Physics
@@ -9,108 +9,145 @@ layout: "concept"
---
The method of **Lagrange multipliers** or **undetermined multipliers**
-is a technique for optimizing (i.e. finding the extrema of)
-a function $$f(x, y, z)$$,
-subject to a given constraint $$\phi(x, y, z) = C$$,
-where $$C$$ is a constant.
-
-If we ignore the constraint $$\phi$$,
-optimizing $$f$$ simply comes down to finding stationary points:
+is a technique for optimizing (i.e. finding extrema of)
+a function $$f$$ subject to **equality constraints**.
+For example, in 2D, the goal is to maximize/minimize $$f(x, y)$$
+while satisfying $$g(x, y) = 0$$.
+We assume that $$f$$ and $$g$$ are both continuous
+and have continuous first derivatives,
+and that their domain is all of $$\mathbb{R}^2$$.
+
+Side note: many authors write that Lagrange multipliers
+can be used for constraints of the form $$g(x, y) = c$$ for a constant $$c$$.
+However, this method technically requires $$c = 0$$.
+This issue is easy to solve: given $$g = c$$,
+simply define $$\tilde{g} \equiv g - c = 0$$
+and use that as constraint instead.
+
+Before introducing $$g$$,
+optimizing $$f$$ comes down to finding its stationary points:
$$\begin{aligned}
- 0 &= \dd{f} = f_x \dd{x} + f_y \dd{y} + f_z \dd{z}
+ 0
+ &= \nabla f
+ = \bigg( \pdv{f}{x}, \pdv{f}{y} \bigg)
\end{aligned}$$
-This problem is easy:
-$$\dd{x}$$, $$\dd{y}$$, and $$\dd{z}$$ are independent and arbitrary,
-so all we need to do is find the roots of
-the partial derivatives $$f_x$$, $$f_y$$ and $$f_z$$,
-which we respectively call $$x_0$$, $$y_0$$ and $$z_0$$,
-and then the extremum is simply $$(x_0, y_0, z_0)$$.
-
-But the constraint $$\phi$$, over which we have no control,
-adds a relation between $$\dd{x}$$, $$\dd{y}$$, and $$\dd{z}$$,
-so if two are known, the third is given by $$\phi = C$$.
-The problem is then a system of equations:
+This problem is easy: the two dimensions can be handled independently,
+so all we need to do is find the roots of the partial derivatives.
+
+However, adding $$g$$ makes the problem much more complicated:
+points with $$\nabla f = 0$$ might not satisfy $$g = 0$$,
+and points where $$g = 0$$ might not have $$\nabla f = 0$$.
+The dimensions also cannot be handled independently anymore,
+since they are implicitly related by $$g$$.
+
+Imagine a contour plot of $$g(x, y)$$.
+The trick is this: if we follow a contour of $$g = 0$$,
+the highest and lowest values of $$f$$ along the way
+are the desired local extrema.
+Recall our assumption that $$\nabla f$$ is continuous:
+hence *along our contour* $$f$$ is slowly-varying
+in the close vicinity of each such point,
+and stationary at the point itself.
+We thus have two categories of extrema:
+
+1. $$\nabla f = 0$$ there,
+ i.e. $$f$$ is slowly-varying along *all* directions around the point.
+ In other words, a stationary point of $$f$$
+ coincidentally lies on a contour of $$g = 0$$.
+
+2. The contours of $$f$$ and $$g$$ are parallel around the point.
+ By definition, $$f$$ is stationary along each of its contours,
+ so when we find that $$f$$ is stationary at a point on our $$g = 0$$ path,
+ it means we touched a contour of $$f$$.
+  Obviously, every point lies on *some* contour of $$f$$,
+ but if they are not parallel,
+ then $$f$$ is increasing or decreasing along our path,
+ so this is not an extremum and we must continue our search.
+
+What about the edge case that $$g = 0$$ and $$\nabla g = 0$$ at the same point,
+i.e. we locally have no contour to follow?
+Do we just take whatever value $$f$$ has there?
+No, by convention, we do not,
+because this does not really count as *optimizing* $$f$$.
+
+Now, in the 2nd category, parallel contours imply parallel gradients,
+i.e. $$\nabla f$$ and $$\nabla g$$ differ only in magnitude, not direction.
+Formally:
$$\begin{aligned}
- 0 &= \dd{f} = f_x \dd{x} + f_y \dd{y} + f_z \dd{z}
- \\
- 0 &= \dd{\phi} = \phi_x \dd{x} + \phi_y \dd{y} + \phi_z \dd{z}
+ \nabla f = -\lambda \nabla g
\end{aligned}$$
-Solving this directly would be a delicate balancing act
-of all the partial derivatives.
-
-To help us solve this, we introduce a "dummy" parameter $$\lambda$$,
-the so-called **Lagrange multiplier**,
-and contruct a new function $$L$$ given by:
-
-$$\begin{aligned}
- L(x, y, z) = f(x, y, z) + \lambda \phi(x, y, z)
-\end{aligned}$$
+Where $$\lambda$$ is the **Lagrange multiplier**
+that quantifies the difference in magnitude between the gradients.
+By setting $$\lambda = 0$$, this equation also handles the 1st category $$\nabla f = 0$$.
+Some authors define $$\lambda$$ with the opposite sign.
-At the extremum, $$\dd{L} = \dd{f} + \lambda \dd{\phi} = 0$$,
-so now the problem is a "single" equation again:
+The method of Lagrange multipliers uses these facts
+to rewrite a constrained $$N$$-dimensional optimization problem
+as an unconstrained $$(N\!+\!1)$$-dimensional optimization problem
+by defining the **Lagrangian function** $$\mathcal{L}$$ as follows:
$$\begin{aligned}
- 0 = \dd{L}
- = (f_x + \lambda \phi_x) \dd{x} + (f_y + \lambda \phi_y) \dd{y} + (f_z + \lambda \phi_z) \dd{z}
+ \boxed{
+ \mathcal{L}(x, y, \lambda)
+ \equiv f(x, y) + \lambda g(x, y)
+ }
\end{aligned}$$
-Assuming $$\phi_z \neq 0$$, we now choose $$\lambda$$ such that $$f_z + \lambda \phi_z = 0$$.
-This choice represents satisfying the constraint,
-so now the remaining $$\dd{x}$$ and $$\dd{y}$$ are independent again,
-and we simply have to find the roots of $$f_x + \lambda \phi_x$$ and $$f_y + \lambda \phi_y$$.
-
-In effect, after introducing $$\lambda$$,
-we have four unknowns $$(x, y, z, \lambda)$$,
-but also four equations:
+Let us do an unconstrained optimization of $$\mathcal{L}$$ as usual,
+by demanding that it is stationary:
$$\begin{aligned}
- L_x = L_y = L_z = 0
- \qquad \quad
- \phi = C
+ 0
+ = \nabla \mathcal{L}
+ &= \bigg( \pdv{\mathcal{L}}{x}, \pdv{\mathcal{L}}{y}, \pdv{\mathcal{L}}{\lambda} \bigg)
+ \\
+ &= \bigg( \pdv{f}{x} + \lambda \pdv{g}{x}, \:\:\: \pdv{f}{y} + \lambda \pdv{g}{y}, \:\:\: g \bigg)
\end{aligned}$$
-We are only really interested in the first three unknowns $$(x, y, z)$$,
-so $$\lambda$$ is sometimes called the **undetermined multiplier**,
-since it is just an algebraic helper whose value is irrelevant.
-
-This method generalizes nicely to multiple constraints or more variables:
-suppose that we want to find the extrema of $$f(x_1, ..., x_N)$$
+The last item in this vector represents $$g = 0$$,
+and the others $$\nabla f = -\lambda \nabla g$$ as discussed earlier.
+To solve this equation,
+we assign $$\lambda$$ a value that agrees with it
+(our discussion of the two categories above
+guarantees that such a value exists for each local extremum),
+and then find the locations $$(x, y)$$ that satisfy it.
+However, as usual for optimization problems,
+this method only finds *local* extrema *and* saddle points;
+it is a necessary condition for optimality, but not sufficient.
+
+We often assign $$\lambda$$ an algebraic expression rather than a value,
+usually without even bothering to calculate its actual value.
+In fact, in some cases, $$\lambda$$'s only purpose is to help us reason
+about the interdependence of a system of equations
+(see [example 3](https://en.wikipedia.org/wiki/Lagrange_multiplier#Example_3:_Entropy) on Wikipedia);
+then $$\lambda$$ is not even given an expression!
+Hence it is sometimes also called an *undetermined multiplier*.
+
+This method generalizes nicely to multiple constraints or more variables.
+Suppose that we want to find the extrema of $$f(x_1, ..., x_N)$$
subject to $$M < N$$ conditions:
$$\begin{aligned}
- \phi_1(x_1, ..., x_N) = C_1 \qquad \cdots \qquad \phi_M(x_1, ..., x_N) = C_M
-\end{aligned}$$
-
-This once again turns into a delicate system of $$M+1$$ equations to solve:
-
-$$\begin{aligned}
- 0 &= \dd{f} = f_{x_1} \dd{x_1} + ... + f_{x_N} \dd{x_N}
- \\
- 0 &= \dd{\phi_1} = \phi_{1, x_1} \dd{x_1} + ... + \phi_{1, x_N} \dd{x_N}
- \\
- &\vdots
- \\
- 0 &= \dd{\phi_M} = \phi_{M, x_1} \dd{x_1} + ... + \phi_{M, x_N} \dd{x_N}
+ g_1(x_1, ..., x_N) = c_1
+ \qquad \cdots \qquad
+ g_M(x_1, ..., x_N) = c_M
\end{aligned}$$
Then we introduce $$M$$ Lagrange multipliers $$\lambda_1, ..., \lambda_M$$
-and define $$L(x_1, ..., x_N)$$:
+and define $$\mathcal{L}(x_1, ..., x_N, \lambda_1, ..., \lambda_M)$$:
$$\begin{aligned}
- L = f + \sum_{m = 1}^M \lambda_m \phi_m
+    \mathcal{L} \equiv f + \sum_{m = 1}^M \lambda_m (g_m - c_m)
\end{aligned}$$
-As before, we set $$\dd{L} = 0$$ and choose the multipliers $$\lambda_1, ..., \lambda_M$$
-to eliminate $$M$$ of its $$N$$ terms:
+As before, we set $$\nabla \mathcal{L} = 0$$ and choose the multipliers
+$$\lambda_1, ..., \lambda_M$$ to satisfy the resulting system of $$(N\!+\!M)$$ 1D equations,
+and then find the coordinates of the extrema.
-$$\begin{aligned}
- 0 = \dd{L}
- = \sum_{n = 1}^N \Big( f_{x_n} + \sum_{m = 1}^M \lambda_m \phi_{x_n} \Big) \dd{x_n}
-\end{aligned}$$
## References
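As a worked example of the rewritten method (our own choice of functions, purely for illustration): maximize/minimize $$f(x, y) = x y$$ subject to $$g(x, y) = x^2 + y^2 - 1 = 0$$, by demanding that $$\mathcal{L}$$ is stationary. A SymPy sketch:

```python
import sympy as sp

x, y, lam = sp.symbols("x y lambda", real=True)
f = x * y
g = x**2 + y**2 - 1
L = f + lam * g  # the Lagrangian

# Demand that L is stationary in all three variables:
eqs = [sp.diff(L, v) for v in (x, y, lam)]
for sol in sp.solve(eqs, [x, y, lam], dict=True):
    print(sol, "  f =", f.subs(sol))
# Four candidates at (+-1/sqrt(2), +-1/sqrt(2)):
# f = 1/2 (maxima) when x = y, and f = -1/2 (minima) when x = -y.
```

Note that $$\lambda$$ comes out as $$\mp 1/2$$ here, but its value is never actually used, in line with the remarks above.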
diff --git a/source/know/concept/material-derivative/index.md b/source/know/concept/material-derivative/index.md
index 93e8ad0..7225053 100644
--- a/source/know/concept/material-derivative/index.md
+++ b/source/know/concept/material-derivative/index.md
@@ -16,9 +16,9 @@ e.g. the temperature or pressure,
represented by a scalar field $$f(\va{r}, t)$$.
If the fluid is static, the evolution of $$f$$ is simply $$\ipdv{f}{t}$$,
-since each point of the fluid is motionless.
+since each point is motionless.
However, if the fluid is moving, we have a problem:
-the fluid molecules at position $$\va{r} = \va{r}_0$$ are not necessarily
+the fluid molecules at position $$\va{r} = \va{r}_0$$ are not
the same ones at time $$t = t_0$$ and $$t = t_1$$.
Those molecules take $$f$$ with them as they move,
so we need to account for this transport somehow.
diff --git a/source/know/concept/parsevals-theorem/index.md b/source/know/concept/parsevals-theorem/index.md
index 377f3a1..41e8fed 100644
--- a/source/know/concept/parsevals-theorem/index.md
+++ b/source/know/concept/parsevals-theorem/index.md
@@ -26,20 +26,21 @@ $$\begin{aligned}
{% include proof/start.html id="proof-fourier" -%}
-We insert the inverse FT into the defintion of the inner product:
+We insert the inverse FT into the definition of the inner product:
$$\begin{aligned}
\Inprod{f}{g}
&= \int_{-\infty}^\infty \big( \hat{\mathcal{F}}^{-1}\{\tilde{f}(k)\}\big)^* \: \hat{\mathcal{F}}^{-1}\{\tilde{g}(k)\} \dd{x}
\\
&= B^2 \int
- \Big( \int \tilde{f}^*(k_1) \exp(i s k_1 x) \dd{k_1} \Big)
- \Big( \int \tilde{g}(k) \exp(- i s k x) \dd{k} \Big)
+ \Big( \int \tilde{f}^*(k') \: e^{i s k' x} \dd{k'} \Big)
+ \Big( \int \tilde{g}(k) \: e^{- i s k x} \dd{k} \Big)
\dd{x}
\\
- &= 2 \pi B^2 \iint \tilde{f}^*(k_1) \tilde{g}(k) \Big( \frac{1}{2 \pi} \int_{-\infty}^\infty \exp(i s x (k_1 - k)) \dd{x} \Big) \dd{k_1} \dd{k}
+ &= 2 \pi B^2 \iint \tilde{f}^*(k') \: \tilde{g}(k) \Big( \frac{1}{2 \pi}
+ \int_{-\infty}^\infty e^{i s x (k' - k)} \dd{x} \Big) \dd{k'} \dd{k}
\\
- &= 2 \pi B^2 \iint \tilde{f}^*(k_1) \: \tilde{g}(k) \: \delta(s (k_1 - k)) \dd{k_1} \dd{k}
+ &= 2 \pi B^2 \iint \tilde{f}^*(k') \: \tilde{g}(k) \: \delta\big(s (k' \!-\! k)\big) \dd{k'} \dd{k}
\\
&= \frac{2 \pi B^2}{|s|} \int_{-\infty}^\infty \tilde{f}^*(k) \: \tilde{g}(k) \dd{k}
= \frac{2 \pi B^2}{|s|} \inprod{\tilde{f}}{\tilde{g}}
@@ -54,13 +55,14 @@ $$\begin{aligned}
&= \int_{-\infty}^\infty \big( \hat{\mathcal{F}}\{f(x)\}\big)^* \: \hat{\mathcal{F}}\{g(x)\} \dd{k}
\\
&= A^2 \int
- \Big( \int f^*(x_1) \exp(- i s k x_1) \dd{x_1} \Big)
- \Big( \int g(x) \exp(i s k x) \dd{x} \Big)
+ \Big( \int f^*(x') \: e^{- i s k x'} \dd{x'} \Big)
+ \Big( \int g(x) \: e^{i s k x} \dd{x} \Big)
\dd{k}
\\
- &= 2 \pi A^2 \iint f^*(x_1) g(x) \Big( \frac{1}{2 \pi} \int_{-\infty}^\infty \exp(i s k (x_1 - x)) \dd{k} \Big) \dd{x_1} \dd{x}
+ &= 2 \pi A^2 \iint f^*(x') \: g(x) \Big( \frac{1}{2 \pi}
+ \int_{-\infty}^\infty e^{i s k (x - x')} \dd{k} \Big) \dd{x'} \dd{x}
\\
- &= 2 \pi A^2 \iint f^*(x_1) \: g(x) \: \delta(s (x_1 - x)) \dd{x_1} \dd{x}
+ &= 2 \pi A^2 \iint f^*(x') \: g(x) \: \delta\big(s (x \!-\! x')\big) \dd{x'} \dd{x}
\\
&= \frac{2 \pi A^2}{|s|} \int_{-\infty}^\infty f^*(x) \: g(x) \dd{x}
= \frac{2 \pi A^2}{|s|} \Inprod{f}{g}
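With NumPy's DFT conventions, the discrete analogue of the theorem states that the inner product of two signals equals that of their spectra divided by $$N$$; a short sketch to confirm:

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.standard_normal(128) + 1j * rng.standard_normal(128)
g = rng.standard_normal(128) + 1j * rng.standard_normal(128)

lhs = np.vdot(f, g)  # <f, g>, with the first argument conjugated
rhs = np.vdot(np.fft.fft(f), np.fft.fft(g)) / len(f)
print(np.allclose(lhs, rhs))  # True
```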
diff --git a/source/know/concept/rabi-oscillation/index.md b/source/know/concept/rabi-oscillation/index.md
index 07f8b25..2fcdea8 100644
--- a/source/know/concept/rabi-oscillation/index.md
+++ b/source/know/concept/rabi-oscillation/index.md
@@ -15,11 +15,11 @@ In quantum mechanics, from the derivation of
we know that a time-dependent term $$\hat{H}_1$$ in the Hamiltonian
affects the state as follows,
where $$c_n(t)$$ are the coefficients of the linear combination
-of basis states $$\Ket{n} \exp(-i E_n t / \hbar)$$:
+of basis states $$\Ket{n} e^{-i E_n t / \hbar}$$:
$$\begin{aligned}
i \hbar \dv{c_m}{t}
- = \sum_{n} c_n(t) \matrixel{m}{\hat{H}_1}{n} \exp(i \omega_{mn} t)
+ = \sum_{n} c_n(t) \matrixel{m}{\hat{H}_1}{n} e^{i \omega_{mn} t}
\end{aligned}$$
Where $$\omega_{mn} \equiv (E_m \!-\! E_n) / \hbar$$
@@ -31,10 +31,10 @@ in which case the above equation can be expanded to the following:
$$\begin{aligned}
\dv{c_a}{t}
- &= - \frac{i}{\hbar} \matrixel{a}{\hat{H}_1}{b} \exp(- i \omega_0 t) \: c_b - \frac{i}{\hbar} \matrixel{a}{\hat{H}_1}{a} \: c_a
+ &= - \frac{i}{\hbar} \matrixel{a}{\hat{H}_1}{b} e^{-i \omega_0 t} \: c_b - \frac{i}{\hbar} \matrixel{a}{\hat{H}_1}{a} c_a
\\
\dv{c_b}{t}
- &= - \frac{i}{\hbar} \matrixel{b}{\hat{H}_1}{a} \exp(i \omega_0 t) \: c_a - \frac{i}{\hbar} \matrixel{b}{\hat{H}_1}{b} \: c_b
+ &= - \frac{i}{\hbar} \matrixel{b}{\hat{H}_1}{a} e^{i \omega_0 t} \: c_a - \frac{i}{\hbar} \matrixel{b}{\hat{H}_1}{b} c_b
\end{aligned}$$
Where $$\omega_0 \equiv \omega_{ba}$$ is positive.
@@ -44,10 +44,10 @@ states that the diagonal matrix elements vanish, leaving:
$$\begin{aligned}
\dv{c_a}{t}
- &= - \frac{i}{\hbar} \matrixel{a}{\hat{H}_1}{b} \exp(- i \omega_0 t) \: c_b
+ &= - \frac{i}{\hbar} \matrixel{a}{\hat{H}_1}{b} e^{-i \omega_0 t} \: c_b
\\
\dv{c_b}{t}
- &= - \frac{i}{\hbar} \matrixel{b}{\hat{H}_1}{a} \exp(i \omega_0 t) \: c_a
+ &= - \frac{i}{\hbar} \matrixel{b}{\hat{H}_1}{a} e^{i \omega_0 t} \: c_a
\end{aligned}$$
We now choose $$\hat{H}_1$$ to be as follows,
@@ -56,7 +56,7 @@ sinusoidally oscillating with a spatially odd $$V(\vec{r})$$:
$$\begin{aligned}
\hat{H}_1(t)
= V \cos(\omega t)
- = \frac{V}{2} \Big( \exp(i \omega t) + \exp(-i \omega t) \Big)
+ = \frac{V}{2} \Big( e^{i \omega t} + e^{-i \omega t} \Big)
\end{aligned}$$
We insert this into the equations for $$c_a$$ and $$c_b$$,
@@ -64,16 +64,16 @@ and define $$V_{ab} \equiv \matrixel{a}{V}{b}$$, leading us to:
$$\begin{aligned}
\dv{c_a}{t}
- &= - i \frac{V_{ab}}{2 \hbar} \Big( \exp\!\big(i (\omega \!-\! \omega_0) t\big) + \exp\!\big(\!-\! i (\omega \!+\! \omega_0) t\big) \Big) \: c_b
+ &= - i \frac{V_{ab}}{2 \hbar} \Big( e^{i (\omega - \omega_0) t} + e^{-i (\omega + \omega_0) t} \Big) \: c_b
\\
\dv{c_b}{t}
- &= - i \frac{V_{ab}}{2 \hbar} \Big( \exp\!\big(i (\omega \!+\! \omega_0) t\big) + \exp\!\big(\!-\! i (\omega \!-\! \omega_0) t\big) \Big) \: c_a
+ &= - i \frac{V_{ab}}{2 \hbar} \Big( e^{i (\omega + \omega_0) t} + e^{-i (\omega - \omega_0) t} \Big) \: c_a
\end{aligned}$$
Here, we make the
[rotating wave approximation](/know/concept/rotating-wave-approximation/):
assuming we are close to resonance $$\omega \approx \omega_0$$,
-we argue that $$\exp(i (\omega \!+\! \omega_0) t)$$
+we argue that $$e^{i (\omega + \omega_0) t}$$
oscillates so fast that its effect is negligible
when the system is observed over a reasonable time interval.
Dropping those terms leaves us with:
@@ -82,10 +82,10 @@ $$\begin{aligned}
\boxed{
\begin{aligned}
\dv{c_a}{t}
- &= - i \frac{V_{ab}}{2 \hbar} \exp\!\big(i (\omega \!-\! \omega_0) t \big) \: c_b
+ &= - i \frac{V_{ab}}{2 \hbar} \: e^{i (\omega - \omega_0) t} \: c_b
\\
\dv{c_b}{t}
- &= - i \frac{V_{ba}}{2 \hbar} \exp\!\big(\!-\! i (\omega \!-\! \omega_0) t \big) \: c_a
+ &= - i \frac{V_{ba}}{2 \hbar} \: e^{-i (\omega - \omega_0) t} \: c_a
\end{aligned}
}
\end{aligned}$$
@@ -96,13 +96,12 @@ and then substitute $$\idv{c_b}{t}$$ for the second equation:
$$\begin{aligned}
\dvn{2}{c_a}{t}
- &= - i \frac{V_{ab}}{2 \hbar} \bigg( i (\omega - \omega_0) \: c_b + \dv{c_b}{t} \bigg) \exp\!\big(i (\omega \!-\! \omega_0) t \big)
+ &= - i \frac{V_{ab}}{2 \hbar} \bigg( i (\omega - \omega_0) \: c_b + \dv{c_b}{t} \bigg) e^{i (\omega - \omega_0) t}
\\
- &= - i \frac{V_{ab}}{2 \hbar} \bigg( i (\omega - \omega_0) \: c_b
- - i \frac{V_{ba}}{2 \hbar} \exp\!\big(\!-\! i (\omega \!-\! \omega_0) t \big) \: c_a \bigg)
- \exp\!\big(i (\omega \!-\! \omega_0) t \big)
+ &= - i \frac{V_{ab}}{2 \hbar} \bigg( i (\omega - \omega_0) \: c_b
+ - i \frac{V_{ba}}{2 \hbar} \: e^{-i (\omega - \omega_0) t} \: c_a \bigg) e^{i (\omega - \omega_0) t}
\\
- &= \frac{V_{ab}}{2 \hbar} (\omega - \omega_0) \exp\!\big(i (\omega \!-\! \omega_0) t \big) \: c_b - \frac{|V_{ab}|^2}{(2 \hbar)^2} c_a
+ &= \frac{V_{ab}}{2 \hbar} (\omega - \omega_0) \: e^{i (\omega - \omega_0) t} \: c_b - \frac{|V_{ab}|^2}{(2 \hbar)^2} \: c_a
\end{aligned}$$
In the first term, we recognize $$\idv{c_a}{t}$$,
@@ -113,7 +112,7 @@ $$\begin{aligned}
= \dvn{2}{c_a}{t} - i (\omega - \omega_0) \dv{c_a}{t} + \frac{|V_{ab}|^2}{(2 \hbar)^2} \: c_a
\end{aligned}$$
-To solve this, we make the ansatz $$c_a(t) = \exp(\lambda t)$$,
+To solve this, we make the ansatz $$c_a(t) = e^{\lambda t}$$,
which, upon insertion, gives us:
$$\begin{aligned}
@@ -148,7 +147,7 @@ to be determined from initial conditions (and normalization):
$$\begin{aligned}
\boxed{
c_a(t)
- = \Big( A \sin(\tilde{\Omega} t / 2) + B \cos(\tilde{\Omega} t / 2) \Big) \exp\!\big(i (\omega \!-\! \omega_0) t / 2 \big)
+ = \Big( A \sin(\tilde{\Omega} t / 2) + B \cos(\tilde{\Omega} t / 2) \Big) e^{i (\omega - \omega_0) t / 2}
}
\end{aligned}$$
@@ -173,7 +172,7 @@ Note that the period was halved by squaring.
This periodic "flopping" of the particle between $$\Ket{a}$$ and $$\Ket{b}$$
is known as **Rabi oscillation**, **Rabi flopping** or the **Rabi cycle**.
This is a more accurate treatment
-of the flopping found from first-order perturbation theory.
+of the flopping that textbooks derive from first-order perturbation theory.
The name **generalized Rabi frequency** suggests
that there is a non-general version.
@@ -185,6 +184,8 @@ $$\begin{aligned}
\equiv \frac{V_{ba}}{\hbar}
\end{aligned}$$
+Some authors use $$|V_{ba}|$$ instead,
+but defining it without the absolute value lets us use $$\Omega$$ as a convenient abbreviation.
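The boxed equations above can also be integrated numerically as an independent check of the result. A sketch assuming SciPy, with $$\hbar = 1$$, a real $$V_{ab} = V_{ba}$$, and an illustrative detuning:

```python
import numpy as np
from scipy.integrate import solve_ivp

V, Delta = 1.0, 0.4          # V_ab and detuning omega - omega_0 (illustrative)
W = np.sqrt(Delta**2 + V**2) # generalized Rabi frequency

def rhs(t, c):               # the boxed RWA equations, with hbar = 1
    ca, cb = c
    return [-0.5j * V * np.exp(1j * Delta * t) * cb,
            -0.5j * V * np.exp(-1j * Delta * t) * ca]

t = np.linspace(0.0, 20.0, 400)
sol = solve_ivp(rhs, (0.0, 20.0), [1.0 + 0j, 0j], t_eval=t,
                rtol=1e-10, atol=1e-12)

# Starting from |a>, the population of |b> flops as (V/W)^2 sin^2(W t / 2):
print(np.allclose(np.abs(sol.y[1])**2,
                  (V / W)**2 * np.sin(W * t / 2)**2, atol=1e-6))  # True
```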
As an example, Rabi oscillation arises
in the [electric dipole approximation](/know/concept/electric-dipole-approximation/),
where $$\hat{H}_1$$ is:
diff --git a/source/know/concept/self-steepening/index.md b/source/know/concept/self-steepening/index.md
index 9666167..e06b0b5 100644
--- a/source/know/concept/self-steepening/index.md
+++ b/source/know/concept/self-steepening/index.md
@@ -48,15 +48,16 @@ $$\begin{aligned}
\end{aligned}$$
The phase $$\phi$$ is not so interesting, so we focus on the latter equation for $$P$$.
-As it turns out, it has a general solution of the form below, which shows that
-more intense parts of the pulse will tend to lag behind compared to the rest:
+As it turns out, it has a general solution of the form below (you can verify this yourself),
+which shows that more intense parts of the pulse
+will lag behind the rest:
$$\begin{aligned}
P(z,t) = f(t - 3 \varepsilon z P)
\end{aligned}$$
Where $$f$$ is the initial power profile: $$f(t) = P(0,t)$$.
-The derivatives $$P_t$$ and $$P_z$$ are then given by:
+The derivatives $$P_t$$ and $$P_z$$ are given by:
$$\begin{aligned}
P_t
@@ -76,12 +77,15 @@ These derivatives both go to infinity when their denominator is zero,
which, since $$\varepsilon$$ is positive, will happen earliest where $$f'$$
has its most negative value, called $$f_\mathrm{min}'$$,
which is located on the trailing edge of the pulse.
-At the propagation distance where this occurs, $$L_\mathrm{shock}$$,
+At the propagation distance $$z$$ where this occurs, $$L_\mathrm{shock}$$,
the pulse will "tip over", creating a discontinuous shock:
$$\begin{aligned}
+ 0
+ = 1 + 3 \varepsilon z f_\mathrm{min}'
+ \qquad \implies \qquad
\boxed{
- L_\mathrm{shock} = -\frac{1}{3 \varepsilon f_\mathrm{min}'}
+ L_\mathrm{shock} \equiv -\frac{1}{3 \varepsilon f_\mathrm{min}'}
}
\end{aligned}$$
@@ -135,5 +139,6 @@ $$\begin{aligned}
\end{aligned}$$
+
## References
1. B.R. Suydam, [Self-steepening of optical pulses](https://doi.org/10.1007/0-387-25097-2_6), 2006, Springer.
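For a concrete feeling of $$L_\mathrm{shock}$$: a Gaussian profile $$f(t) = P_0 e^{-t^2 / T^2}$$ has its steepest negative slope $$f_\mathrm{min}' = -\sqrt{2} \: P_0 \: e^{-1/2} / T$$ at $$t = T / \sqrt{2}$$, on the trailing edge. A sketch in arbitrary consistent units, with all values chosen purely for illustration:

```python
import numpy as np

P0, T, eps = 1.0, 1.0, 0.01  # illustrative, arbitrary units

t = np.linspace(-5 * T, 5 * T, 100_001)
f = P0 * np.exp(-t**2 / T**2)

f_min_prime = np.gradient(f, t).min()  # steepest negative slope, numerically
analytic = -np.sqrt(2.0) * P0 * np.exp(-0.5) / T
print(np.isclose(f_min_prime, analytic, rtol=1e-5))  # True

print(-1.0 / (3.0 * eps * f_min_prime))  # L_shock ~ 38.9 in these units
```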
diff --git a/source/know/concept/shors-algorithm/index.md b/source/know/concept/shors-algorithm/index.md
index a47151a..5ae5077 100644
--- a/source/know/concept/shors-algorithm/index.md
+++ b/source/know/concept/shors-algorithm/index.md
@@ -33,9 +33,10 @@ Shor's algorithm can solve practically every such problem.
## Integer factorization
-Originally, Shor's algorithm was designed to factorize an integer $$N$$,
-in which case the goal is to find the period $$s$$ of
-the modular exponentiation function $$f$$ (for reasons explained later):
+Originally, Shor's algorithm was designed to factorize an integer $$N$$.
+For reasons explained later,
+this means our goal is to find the period $$s$$ of
+the modular exponentiation function $$f$$:
$$\begin{aligned}
f(x)
@@ -79,7 +80,7 @@ $$\begin{aligned}
\frac{1}{\sqrt{Q}} \sum_{x = 0}^{Q - 1} \Ket{x} \Ket{f(x)}
\end{aligned}$$
-Then we measure $$f(x)$$, causing it collapse as follows,
+Then we measure $$f(x)$$, causing it to collapse as follows
for an unknown arbitrary value of $$x_0$$:
$$\begin{aligned}
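To make the role of $$s$$ concrete, the sketch below finds it classically by brute force (the step that Shor's algorithm performs efficiently on quantum hardware) and then extracts the factors via greatest common divisors, using the illustrative choice $$a = 7$$, $$N = 15$$:

```python
from math import gcd

def period(a, N):
    """Brute-force the smallest s > 0 with a^s = 1 (mod N)."""
    x, s = a % N, 1
    while x != 1:
        x, s = (x * a) % N, s + 1
    return s

a, N = 7, 15
s = period(a, N)  # s = 4
if s % 2 == 0 and pow(a, s // 2, N) != N - 1:
    print(gcd(pow(a, s // 2) - 1, N),  # 3
          gcd(pow(a, s // 2) + 1, N))  # 5
```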
diff --git a/source/know/concept/thermodynamic-potential/index.md b/source/know/concept/thermodynamic-potential/index.md
index ece1551..b3bedda 100644
--- a/source/know/concept/thermodynamic-potential/index.md
+++ b/source/know/concept/thermodynamic-potential/index.md
@@ -15,8 +15,8 @@ Such functions are either energies (hence *potential*) or entropies.
Which potential (of many) decides the equilibrium states for a given system?
That depends which variables are assumed to already be in automatic equilibrium.
Such variables are known as the **natural variables** of that potential.
-For example, if a system can freely exchange heat with its surroundings,
-and is consequently assumed to be at the same temperature $$T = T_{\mathrm{sur}}$$,
+For example, if a system can freely exchange heat with its environment,
+and is consequently assumed to be at the same temperature $$T = T_{\mathrm{env}}$$,
then $$T$$ must be a natural variable.
The link from natural variables to potentials
@@ -32,6 +32,7 @@ Mathematically, the potentials are related to each other
by [Legendre transformation](/know/concept/legendre-transform/).
+
## Internal energy
The **internal energy** $$U$$ represents
@@ -76,6 +77,7 @@ to help keep track of which function depends on which variables.
They are meaningless; these are normal partial derivatives.
+
## Enthalpy
The **enthalpy** $$H$$ of a system, in units of energy,
@@ -115,6 +117,7 @@ $$\begin{aligned}
\end{aligned}$$
+
## Helmholtz free energy
The **Helmholtz free energy** $$F$$ represents
@@ -154,6 +157,7 @@ $$\begin{aligned}
\end{aligned}$$
+
## Gibbs free energy
The **Gibbs free energy** $$G$$ represents
@@ -192,6 +196,7 @@ $$\begin{aligned}
\end{aligned}$$
+
## Landau potential
The **Landau potential** or **grand potential** $$\Omega$$, in units of energy,
@@ -230,6 +235,7 @@ $$\begin{aligned}
\end{aligned}$$
+
## Entropy
The **entropy** $$S$$, in units of energy over temperature,
diff --git a/source/know/concept/toffoli-gate/index.md b/source/know/concept/toffoli-gate/index.md
index b9d9528..9a99e69 100644
--- a/source/know/concept/toffoli-gate/index.md
+++ b/source/know/concept/toffoli-gate/index.md
@@ -66,10 +66,10 @@ it swaps the last two coefficients:
$$\begin{aligned}
\mathrm{CCNOT} \Ket{\psi}
- &= \mathrm{CCNOT} \big( c_{000} \Ket{000} + c_{001} \Ket{001} + c_{010} \Ket{010} + c_{011} \Ket{011} \\
+ &= \mathrm{CCNOT} \big( c_{000} \Ket{000} + c_{001} \Ket{001} + c_{010} \Ket{010} + c_{011} \Ket{011} + \\
&\qquad\qquad\quad\:\; c_{100} \Ket{100} + c_{101} \Ket{101} + c_{110} \Ket{110} + c_{111} \Ket{111} \big)
\\
- &= c_{000} \Ket{000} + c_{001} \Ket{001} + c_{010} \Ket{010} + c_{011} \Ket{011} \\
+ &= c_{000} \Ket{000} + c_{001} \Ket{001} + c_{010} \Ket{010} + c_{011} \Ket{011} + \\
&\quad\,\, c_{100} \Ket{100} + c_{101} \Ket{101} + c_{111} \Ket{110} + c_{110} \Ket{111}
\end{aligned}$$
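This amplitude swap is easy to verify from CCNOT's $$8 \times 8$$ permutation-matrix representation, which exchanges the rows for $$\Ket{110}$$ and $$\Ket{111}$$; a minimal NumPy sketch:

```python
import numpy as np

ccnot = np.eye(8)
ccnot[[6, 7]] = ccnot[[7, 6]]     # swap the |110> and |111> rows

c = np.arange(1, 9, dtype=float)  # placeholder amplitudes c_000 ... c_111
print(ccnot @ c)                  # [1. 2. 3. 4. 5. 6. 8. 7.]
```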