Diffstat (limited to 'source/know/concept')
-rw-r--r--  source/know/concept/alfven-waves/index.md                      |   6
-rw-r--r--  source/know/concept/binomial-distribution/index.md             | 105
-rw-r--r--  source/know/concept/central-limit-theorem/index.md             |  90
-rw-r--r--  source/know/concept/conditional-expectation/index.md           |   3
-rw-r--r--  source/know/concept/dispersive-broadening/index.md             |  19
-rw-r--r--  source/know/concept/holomorphic-function/index.md              |  46
-rw-r--r--  source/know/concept/ion-sound-wave/index.md                    |  34
-rw-r--r--  source/know/concept/lagrange-multiplier/index.md               |  18
-rw-r--r--  source/know/concept/langmuir-waves/index.md                    |  12
-rw-r--r--  source/know/concept/maxwell-boltzmann-distribution/index.md    |  85
-rw-r--r--  source/know/concept/modulational-instability/index.md          |  69
-rw-r--r--  source/know/concept/optical-wave-breaking/index.md             |  51
-rw-r--r--  source/know/concept/random-variable/index.md                   |  17
-rw-r--r--  source/know/concept/residue-theorem/index.md                   |  11
-rw-r--r--  source/know/concept/self-phase-modulation/index.md             |  15
-rw-r--r--  source/know/concept/self-steepening/index.md                   |  29
-rw-r--r--  source/know/concept/sigma-algebra/index.md                     |   4
-rw-r--r--  source/know/concept/step-index-fiber/index.md                  | 157
-rw-r--r--  source/know/concept/step-index-fiber/transcendental-full.png   | bin 122545 -> 109224 bytes
-rw-r--r--  source/know/concept/step-index-fiber/transcendental-half.avif  | bin 21001 -> 19600 bytes
-rw-r--r--  source/know/concept/step-index-fiber/transcendental-half.jpg   | bin 95385 -> 84184 bytes
-rw-r--r--  source/know/concept/step-index-fiber/transcendental-half.png   | bin 88438 -> 90521 bytes
-rw-r--r--  source/know/concept/step-index-fiber/transcendental-half.webp  | bin 48626 -> 43374 bytes
23 files changed, 471 insertions, 300 deletions
diff --git a/source/know/concept/alfven-waves/index.md b/source/know/concept/alfven-waves/index.md
index 31576f3..0396c7a 100644
--- a/source/know/concept/alfven-waves/index.md
+++ b/source/know/concept/alfven-waves/index.md
@@ -61,12 +61,12 @@ $$\begin{aligned}
= \frac{1}{\mu_0} \nabla \cross \vb{B}_1
\end{aligned}$$
-Substituting this into the momentum equation,
+Substituting this into the above momentum equation,
and differentiating with respect to $$t$$:
$$\begin{aligned}
\rho \pdvn{2}{\vb{u}_1}{t}
- = \frac{1}{\mu_0} \bigg( \Big( \nabla \cross \pdv{}{\vb{B}1}{t} \Big) \cross \vb{B}_0 \bigg)
+ = \frac{1}{\mu_0} \bigg( \Big( \nabla \cross \pdv{\vb{B}_1}{t} \Big) \cross \vb{B}_0 \bigg)
\end{aligned}$$
For which we can use Faraday's law to rewrite $$\ipdv{\vb{B}_1}{t}$$,
@@ -78,7 +78,7 @@ $$\begin{aligned}
= \nabla \cross (\vb{u}_1 \cross \vb{B}_0)
\end{aligned}$$
-Inserting this into the momentum equation for $$\vb{u}_1$$
+Inserting this back into the momentum equation for $$\vb{u}_1$$
thus yields its final form:
$$\begin{aligned}
diff --git a/source/know/concept/binomial-distribution/index.md b/source/know/concept/binomial-distribution/index.md
index dc75221..9bb32d3 100644
--- a/source/know/concept/binomial-distribution/index.md
+++ b/source/know/concept/binomial-distribution/index.md
@@ -46,19 +46,25 @@ $$\begin{aligned}
{% include proof/start.html id="proof-mean" -%}
-The trick is to treat $$p$$ and $$q$$ as independent until the last moment:
+The trick is to treat $$p$$ and $$q$$ as independent and introduce a derivative:
+
+$$\begin{aligned}
+ \mu
+ &= \sum_{n = 0}^N n P_N(n)
+ = \sum_{n = 0}^N n \binom{N}{n} p^n q^{N - n}
+ = \sum_{n = 0}^N \binom{N}{n} \bigg( p \pdv{(p^n)}{p} \bigg) q^{N - n}
+\end{aligned}$$
+
+Then, using the fact that the binomial coefficients appear when writing out $$(p + q)^N$$:
$$\begin{aligned}
\mu
- &= \sum_{n = 0}^N n \binom{N}{n} p^n q^{N - n}
- = \sum_{n = 0}^N \binom{N}{n} \Big( p \pdv{(p^n)}{p} \Big) q^{N - n}
- \\
&= p \pdv{}{p}\sum_{n = 0}^N \binom{N}{n} p^n q^{N - n}
= p \pdv{}{p}(p + q)^N
= N p (p + q)^{N - 1}
\end{aligned}$$
-Inserting $$q = 1 - p$$ then gives the desired result.
+Finally, inserting $$q = 1 - p$$ gives the desired result.
{% include proof/end.html id="proof-mean" %}
@@ -73,18 +79,21 @@ $$\begin{aligned}
{% include proof/start.html id="proof-var" -%}
+We reuse the previous trick to find $$\overline{n^2}$$
(the mean squared number of successes):
$$\begin{aligned}
\overline{n^2}
&= \sum_{n = 0}^N n^2 \binom{N}{n} p^n q^{N - n}
- = \sum_{n = 0}^N n \binom{N}{n} \Big( p \pdv{}{p}\Big)^2 p^n q^{N - n}
+ = \sum_{n = 0}^N n \binom{N}{n} \bigg( p \pdv{}{p} \bigg) p^n q^{N - n}
+ \\
+ &= \sum_{n = 0}^N \binom{N}{n} \bigg( p \pdv{}{p} \bigg)^2 p^n q^{N - n}
+ = \bigg( p \pdv{}{p} \bigg)^2 \sum_{n = 0}^N \binom{N}{n} p^n q^{N - n}
\\
- &= \Big( p \pdv{}{p}\Big)^2 \sum_{n = 0}^N \binom{N}{n} p^n q^{N - n}
- = \Big( p \pdv{}{p}\Big)^2 (p + q)^N
+ &= \bigg( p \pdv{}{p} \bigg)^2 (p + q)^N
+ = N p \pdv{}{p}p (p + q)^{N - 1}
\\
- &= N p \pdv{}{p}p (p + q)^{N - 1}
- = N p \big( (p + q)^{N - 1} + (N - 1) p (p + q)^{N - 2} \big)
+ &= N p \big( (p + q)^{N - 1} + (N - 1) p (p + q)^{N - 2} \big)
\\
&= N p + N^2 p^2 - N p^2
\end{aligned}$$
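
*(Editorial aside, not part of the commit: the derived results $$\mu = N p$$ and $$\sigma^2 = \overline{n^2} - \mu^2 = N p q$$ can be checked by direct summation over the distribution; the parameters below are arbitrary.)*

```python
from math import comb

# Direct summation check of mu = N p and sigma^2 = N p q
# for arbitrary example parameters.
N, p = 50, 0.3
q = 1 - p

P = [comb(N, n) * p**n * q**(N - n) for n in range(N + 1)]
mu = sum(n * P[n] for n in range(N + 1))
var = sum(n**2 * P[n] for n in range(N + 1)) - mu**2

assert abs(mu - N * p) < 1e-9       # mu = 15.0
assert abs(var - N * p * q) < 1e-9  # sigma^2 = 10.5
```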
@@ -108,7 +117,7 @@ a fact that is sometimes called the **de Moivre-Laplace theorem**:
$$\begin{aligned}
\boxed{
- \lim_{N \to \infty} P_N(n) = \frac{1}{\sqrt{2 \pi \sigma^2}} \exp\!\Big(\!-\!\frac{(n - \mu)^2}{2 \sigma^2} \Big)
+ \lim_{N \to \infty} P_N(n) = \frac{1}{\sqrt{2 \pi \sigma^2}} \exp\!\bigg(\!-\!\frac{(n - \mu)^2}{2 \sigma^2} \bigg)
}
\end{aligned}$$
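
*(Editorial aside, not part of the commit: the de Moivre-Laplace limit kicks in quickly; already for a few hundred trials the pointwise deviation from the Gaussian is far below the peak height. The parameters below are arbitrary.)*

```python
from math import comb, exp, pi, sqrt

# Pointwise comparison of P_N(n) with its Gaussian limit for a
# moderately large N (arbitrary example parameters).
N, p = 400, 0.5
q = 1 - p
mu, var = N * p, N * p * q  # mu = 200, sigma^2 = 100

def P(n):
    return comb(N, n) * p**n * q**(N - n)

def gauss(n):
    return exp(-(n - mu)**2 / (2 * var)) / sqrt(2 * pi * var)

# The worst-case deviation is under 1% of the peak height.
dev = max(abs(P(n) - gauss(n)) for n in range(N + 1))
assert dev < 0.01 * gauss(mu)
```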
@@ -121,73 +130,94 @@ $$\begin{aligned}
\ln\!\big(P_N(n)\big)
&= \sum_{m = 0}^\infty \frac{(n - \mu)^m}{m!} D_m(\mu)
\quad \mathrm{where} \quad
- D_m(n) = \dvn{m}{\ln\!\big(P_N(n)\big)}{n}
+ D_m(n)
+ \equiv \dvn{m}{\ln\!\big(P_N(n)\big)}{n}
\end{aligned}$$
-We use Stirling's approximation to calculate the factorials in $$D_m$$:
+For future convenience while calculating the $$D_m$$, we write out $$\ln(P_N)$$ now:
$$\begin{aligned}
\ln\!\big(P_N(n)\big)
- &= \ln(N!) - \ln(n!) - \ln\!\big((N - n)!\big) + n \ln(p) + (N - n) \ln(q)
- \\
- &\approx \ln(N!) - n \big( \ln(n)\!-\!\ln(p)\!-\!1 \big) - (N\!-\!n) \big( \ln(N\!-\!n)\!-\!\ln(q)\!-\!1 \big)
+ &= \ln(N!) - \ln(n!) - \ln\!\big((N \!-\! n)!\big) + n \ln(p) + (N \!-\! n) \ln(q)
\end{aligned}$$
-For $$D_0(\mu)$$, we need to use a stronger version of Stirling's approximation
-to get a non-zero result. We take advantage of $$N - N p = N q$$:
+For $$D_0(\mu)$$ specifically,
+we need to use a strong version of *Stirling's approximation*
+to arrive at a nonzero result in the end.
+We know that $$N - N p = N q$$:
$$\begin{aligned}
D_0(\mu)
+ &= \ln\!\big(P_N(n)\big) \big|_{n = \mu}
+ \\
+ &= \ln(N!) - \ln(\mu!) - \ln\!\big((N \!-\! \mu)!\big) + \mu \ln(p) + (N \!-\! \mu) \ln(q)
+ \\
&= \ln(N!) - \ln\!\big((N p)!\big) - \ln\!\big((N q)!\big) + N p \ln(p) + N q \ln(q)
\\
- &= \Big( N \ln(N) - N + \frac{1}{2} \ln(2\pi N) \Big)
+ &\approx \Big( N \ln(N) - N + \frac{1}{2} \ln(2\pi N) \Big)
- \Big( N p \ln(N p) - N p + \frac{1}{2} \ln(2\pi N p) \Big) \\
&\qquad - \Big( N q \ln(N q) - N q + \frac{1}{2} \ln(2\pi N q) \Big)
+ N p \ln(p) + N q \ln(q)
\\
- &= N \ln(N) - N (p + q) \ln(N) + N (p + q) - N - \frac{1}{2} \ln(2\pi N p q)
+ &= N \ln(N) - N (p \!+\! q) \ln(N) + N (p \!+\! q) - N - \frac{1}{2} \ln(2\pi N p q)
\\
&= - \frac{1}{2} \ln(2\pi N p q)
- = \ln\!\Big( \frac{1}{\sqrt{2\pi \sigma^2}} \Big)
+ = \ln\!\bigg( \frac{1}{\sqrt{2\pi \sigma^2}} \bigg)
\end{aligned}$$
-Next, we expect that $$D_1(\mu) = 0$$, because $$\mu$$ is the maximum.
-This is indeed the case:
+Next, for $$D_m(\mu)$$ with $$m \ge 1$$,
+we can use a weaker version of Stirling's approximation:
+
+$$\begin{aligned}
+ \ln(P_N)
+ &\approx \ln(N!) - n \big( \ln(n) \!-\! 1 \big) - (N \!-\! n) \big( \ln(N \!-\! n) \!-\! 1 \big) + n \ln(p) + (N \!-\! n) \ln(q)
+ \\
+ &\approx \ln(N!) - n \big( \ln(n) - \ln(p) - 1 \big) - (N\!-\!n) \big( \ln(N\!-\!n) - \ln(q) - 1 \big)
+\end{aligned}$$
+
+We expect that $$D_1(\mu) = 0$$, because $$P_N$$ is maximized at $$\mu$$.
+Indeed it is:
$$\begin{aligned}
D_1(n)
- &= - \big( \ln(n)\!-\!\ln(p)\!-\!1 \big) + \big( \ln(N\!-\!n)\!-\!\ln(q)\!-\!1 \big) - 1 + 1
+ &= \dv{}{n} \ln\!\big(P_N(n)\big)
\\
- &= - \ln(n) + \ln(N - n) + \ln(p) - \ln(q)
+ &= - \big( \ln(n) - \ln(p) - 1 \big) + \big( \ln(N\!-\!n) - \ln(q) - 1 \big) - \frac{n}{n} + \frac{N \!-\! n}{N \!-\! n}
+ \\
+ &= - \ln(n) + \ln(N \!-\! n) + \ln(p) - \ln(q)
\\
D_1(\mu)
- &= \ln(N q) - \ln(N p) + \ln(p) - \ln(q)
- = \ln(N p q) - \ln(N p q)
- = 0
+ &= - \ln(\mu) + \ln(N \!-\! \mu) + \ln(p) - \ln(q)
+ \\
+ &= - \ln(N p q) + \ln(N p q)
+ \\
+ &= 0
\end{aligned}$$
-For the same reason, we expect that $$D_2(\mu)$$ is negative.
+For the same reason, we expect $$D_2(\mu)$$ to be negative.
We find the following expression:
$$\begin{aligned}
D_2(n)
- &= - \frac{1}{n} - \frac{1}{N - n}
- \qquad
+ &= \dvn{2}{}{n} \ln\!\big(P_N(n)\big)
+ = \dv{}{n} D_1(n)
+ = - \frac{1}{n} - \frac{1}{N - n}
+ \\
D_2(\mu)
- = - \frac{1}{Np} - \frac{1}{Nq}
+ &= - \frac{1}{Np} - \frac{1}{Nq}
= - \frac{p + q}{N p q}
= - \frac{1}{\sigma^2}
\end{aligned}$$
-The higher-order derivatives tend to zero for $$N \to \infty$$, so we discard them:
+The higher-order derivatives vanish much faster as $$N \to \infty$$, so we discard them:
$$\begin{aligned}
D_3(n)
= \frac{1}{n^2} - \frac{1}{(N - n)^2}
- \qquad
+ \qquad \quad
D_4(n)
= - \frac{2}{n^3} - \frac{2}{(N - n)^3}
- \qquad
+ \qquad \quad
\cdots
\end{aligned}$$
@@ -197,13 +227,14 @@ the Taylor series approximately becomes:
$$\begin{aligned}
\ln\!\big(P_N(n)\big)
\approx D_0(\mu) + \frac{(n - \mu)^2}{2} D_2(\mu)
- = \ln\!\Big( \frac{1}{\sqrt{2\pi \sigma^2}} \Big) - \frac{(n - \mu)^2}{2 \sigma^2}
+ = \ln\!\bigg( \frac{1}{\sqrt{2\pi \sigma^2}} \bigg) - \frac{(n - \mu)^2}{2 \sigma^2}
\end{aligned}$$
-Taking $$\exp$$ of this expression then yields a normalized Gaussian distribution.
+Raising $$e$$ to this expression then yields a normalized Gaussian distribution.
{% include proof/end.html id="proof-normal" %}
+
## References
1. H. Gould, J. Tobochnik,
*Statistical and thermal physics*, 2nd edition,
diff --git a/source/know/concept/central-limit-theorem/index.md b/source/know/concept/central-limit-theorem/index.md
index 595cee7..e933ee7 100644
--- a/source/know/concept/central-limit-theorem/index.md
+++ b/source/know/concept/central-limit-theorem/index.md
@@ -18,24 +18,24 @@ the resulting means $$\mu_m$$ are normally distributed
across the $$M$$ samples if $$N$$ is sufficiently large.
More formally, for $$N$$ independent variables $$x_n$$ with probability distributions $$p(x_n)$$,
-the central limit theorem states the following,
-where we define the sum $$S$$:
+we define the following totals of all variables, means and variances:
$$\begin{aligned}
- S = \sum_{n = 1}^N x_n
- \qquad
- \mu_S = \sum_{n = 1}^N \mu_n
- \qquad
- \sigma_S^2 = \sum_{n = 1}^N \sigma_n^2
+ t \equiv \sum_{n = 1}^N x_n
+ \qquad \qquad
+ \mu_t \equiv \sum_{n = 1}^N \mu_n
+ \qquad \qquad
+ \sigma_t^2 \equiv \sum_{n = 1}^N \sigma_n^2
\end{aligned}$$
-And crucially, it states that the probability distribution $$p_N(S)$$ of $$S$$ for $$N$$ variables
+The central limit theorem then states that
+the probability distribution $$p_N(t)$$ of $$t$$ for $$N$$ variables
will become a normal distribution when $$N$$ goes to infinity:
$$\begin{aligned}
\boxed{
- \lim_{N \to \infty} \!\big(p_N(S)\big)
- = \frac{1}{\sigma_S \sqrt{2 \pi}} \exp\!\Big( -\frac{(\mu_S - S)^2}{2 \sigma_S^2} \Big)
+ \lim_{N \to \infty} \!\big(p_N(t)\big)
+ = \frac{1}{\sigma_t \sqrt{2 \pi}} \exp\!\bigg( -\frac{(t - \mu_t)^2}{2 \sigma_t^2} \bigg)
}
\end{aligned}$$
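
*(Editorial aside, not part of the commit: the boxed statement can be illustrated empirically by summing uniform variables, whose individual density is far from Gaussian. All numbers below are arbitrary choices.)*

```python
import random
random.seed(0)

# Empirical illustration: the total t of N uniform variables (each with
# mean 1/2 and variance 1/12) has mu_t = N/2, sigma_t^2 = N/12, and its
# distribution is close to normal.
N, M = 30, 20000  # variables per total, number of sampled totals
samples = [sum(random.random() for _ in range(N)) for _ in range(M)]

mean = sum(samples) / M
var = sum((t - mean)**2 for t in samples) / M
assert abs(mean - N / 2) < 0.05   # mu_t = 15
assert abs(var - N / 12) < 0.1    # sigma_t^2 = 2.5

# About 68% of totals lie within one sigma_t of the mean,
# as expected for a normal distribution.
sigma = (N / 12)**0.5
frac = sum(abs(t - mean) < sigma for t in samples) / M
assert 0.66 < frac < 0.70
```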
@@ -45,7 +45,8 @@ Given a probability density $$p(x)$$, its [Fourier transform](/know/concept/four
is called the **characteristic function** $$\phi(k)$$:
$$\begin{aligned}
- \phi(k) = \int_{-\infty}^\infty p(x) \exp(i k x) \dd{x}
+ \phi(k)
+ \equiv \int_{-\infty}^\infty p(x) \exp(i k x) \dd{x}
\end{aligned}$$
Note that $$\phi(k)$$ can be interpreted as the average of $$\exp(i k x)$$.
@@ -54,17 +55,19 @@ where an overline denotes the mean:
$$\begin{aligned}
\phi(k)
- = \sum_{n = 0}^\infty \frac{k^n}{n!} \: \phi^{(n)}(0)
- \qquad
+ = \sum_{n = 0}^\infty \frac{k^n}{n!} \bigg( \dvn{n}{\phi}{k} \Big|_{k = 0} \bigg)
+ \qquad \qquad
\phi(k)
- = \overline{\exp(i k x)} = \sum_{n = 0}^\infty \frac{(ik)^n}{n!} \overline{x^n}
+ = \overline{\exp(i k x)}
+ = \sum_{n = 0}^\infty \frac{(ik)^n}{n!} \overline{x^n}
\end{aligned}$$
By comparing the coefficients of these two power series,
we get a useful relation:
$$\begin{aligned}
- \phi^{(n)}(0) = i^n \: \overline{x^n}
+ \dvn{n}{\phi}{k} \Big|_{k = 0}
+ = i^n \: \overline{x^n}
\end{aligned}$$
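
*(Editorial aside, not part of the commit: this relation is easy to verify for a distribution with a known characteristic function. The sketch below uses the exponential distribution $$p(x) = \lambda e^{-\lambda x}$$, whose $$\phi(k) = \lambda / (\lambda - i k)$$ and moments $$\overline{x^n} = n! / \lambda^n$$ are standard results not taken from the text.)*

```python
# Finite-difference check of d^n(phi)/dk^n |_{k=0} = i^n <x^n>
# for the exponential distribution (lam chosen arbitrarily).
lam = 2.0

def phi(k):
    return lam / (lam - 1j * k)

h = 1e-3
d1 = (phi(h) - phi(-h)) / (2 * h)            # ~ i^1 <x>   = i / lam
d2 = (phi(h) - 2 * phi(0) + phi(-h)) / h**2  # ~ i^2 <x^2> = -2 / lam^2

assert abs(d1 - 1j / lam) < 1e-6
assert abs(d2 + 2 / lam**2) < 1e-5
```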
Next, the **cumulants** $$C^{(n)}$$ are defined from the Taylor expansion of $$\ln\!\big(\phi(k)\big)$$:
@@ -73,73 +76,82 @@ $$\begin{aligned}
\ln\!\big( \phi(k) \big)
= \sum_{n = 1}^\infty \frac{(ik)^n}{n!} C^{(n)}
\quad \mathrm{where} \quad
- C^{(n)} = \frac{1}{i^n} \: \dvn{n}{}{k} \Big(\ln\!\big(\phi(k)\big)\Big) \Big|_{k = 0}
+ C^{(n)}
+ \equiv \frac{1}{i^n} \: \dvn{n}{}{k} \ln\!\big(\phi(k)\big) \Big|_{k = 0}
\end{aligned}$$
The first two cumulants $$C^{(1)}$$ and $$C^{(2)}$$ are of particular interest,
-since they turn out to be the mean and the variance respectively,
-using our earlier relation:
+since they turn out to be the mean and the variance respectively.
+Using our earlier relation:
$$\begin{aligned}
C^{(1)}
- &= - i \dv{}{k} \Big(\ln\!\big(\phi(k)\big)\Big) \Big|_{k = 0}
+ &= - i \dv{}{k} \ln\!\big(\phi(k)\big) \Big|_{k = 0}
= - i \frac{\phi'(0)}{\exp(0)}
= \overline{x}
\\
C^{(2)}
- &= - \dvn{2}{}{k} \Big(\ln\!\big(\phi(k)\big)\Big) \Big|_{k = 0}
+ &= - \dvn{2}{}{k} \ln\!\big(\phi(k)\big) \Big|_{k = 0}
= \frac{\big(\phi'(0)\big)^2}{\exp(0)^2} - \frac{\phi''(0)}{\exp(0)}
= - \overline{x}^2 + \overline{x^2} = \sigma^2
\end{aligned}$$
-Let us now define $$S$$ as the sum of $$N$$ independent variables $$x_n$$, in other words:
+Now that we have introduced these tools,
+we define $$t$$ as the sum
+of $$N$$ independent variables $$x_n$$, in other words:
$$\begin{aligned}
- S = \sum_{n = 1}^N x_n = x_1 + x_2 + ... + x_N
+ t
+ \equiv \sum_{n = 1}^N x_n = x_1 + x_2 + ... + x_N
\end{aligned}$$
-The probability density of $$S$$ is then as follows, where $$p(x_n)$$ are
+The probability density of $$t$$ is then as follows, where $$p(x_n)$$ are
the densities of all the individual variables and $$\delta$$ is
the [Dirac delta function](/know/concept/dirac-delta-function/):
$$\begin{aligned}
- p(S)
- &= \int\cdots\int_{-\infty}^\infty \Big( \prod_{n = 1}^N p(x_n) \Big) \: \delta\Big( S - \sum_{n = 1}^N x_n \Big) \dd{x_1} \cdots \dd{x_N}
+ p(t)
+ &= \int\cdots\int_{-\infty}^\infty \Big( \prod_{n = 1}^N p(x_n) \Big) \: \delta\Big( t - \sum_{n = 1}^N x_n \Big) \dd{x_1} \cdots \dd{x_N}
\\
- &= \Big( p_1 * \big( p_2 * ( ... * (p_N * \delta))\big)\Big)(S)
+ &= \Big( p_1 * \big( p_2 * ( ... * (p_N * \delta))\big)\Big)(t)
\end{aligned}$$
In other words, the integrals pick out all combinations of $$x_n$$ which
-add up to the desired $$S$$-value, and multiply the probabilities
+add up to the desired $$t$$-value, and multiply the probabilities
$$p(x_1) p(x_2) \cdots p(x_N)$$ of each such case. This is a convolution,
so the [convolution theorem](/know/concept/convolution-theorem/)
states that it is a product in the Fourier domain:
$$\begin{aligned}
- \phi_S(k) = \prod_{n = 1}^N \phi_n(k)
+ \phi_t(k)
+ = \prod_{n = 1}^N \phi_n(k)
\end{aligned}$$
By taking the logarithm of both sides, the product becomes a sum,
which we further expand:
$$\begin{aligned}
- \ln\!\big(\phi_S(k)\big)
+ \ln\!\big(\phi_t(k)\big)
= \sum_{n = 1}^N \ln\!\big(\phi_n(k)\big)
= \sum_{n = 1}^N \sum_{m = 1}^{\infty} \frac{(ik)^m}{m!} C_n^{(m)}
\end{aligned}$$
-Consequently, the cumulants $$C^{(m)}$$ stack additively for the sum $$S$$
+Consequently, the cumulants $$C^{(m)}$$ stack additively for the sum $$t$$
of independent variables $$x_m$$, and therefore
the means $$C^{(1)}$$ and variances $$C^{(2)}$$ do too:
$$\begin{aligned}
- C_S^{(m)} = \sum_{n = 1}^N C_n^{(m)} = C_1^{(m)} + C_2^{(m)} + ... + C_N^{(m)}
+ C_t^{(m)}
+ = \sum_{n = 1}^N C_n^{(m)}
+ = C_1^{(m)} + C_2^{(m)} + ... + C_N^{(m)}
\end{aligned}$$
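
*(Editorial aside, not part of the commit: the additivity of means and variances can be confirmed exactly by convolving two small discrete distributions with arbitrary weights.)*

```python
# Exact check that means and variances add for the sum of two
# independent discrete variables (arbitrary example weights).
pA = {0: 0.2, 1: 0.5, 2: 0.3}
pB = {0: 0.6, 1: 0.1, 2: 0.3}

def mean(p):
    return sum(x * w for x, w in p.items())

def var(p):
    m = mean(p)
    return sum((x - m)**2 * w for x, w in p.items())

# Distribution of the sum t = a + b, by direct convolution:
pT = {}
for a, wa in pA.items():
    for b, wb in pB.items():
        pT[a + b] = pT.get(a + b, 0.0) + wa * wb

assert abs(mean(pT) - (mean(pA) + mean(pB))) < 1e-12
assert abs(var(pT) - (var(pA) + var(pB))) < 1e-12
```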
We now introduce the scaled sum $$z$$ as the new combined variable:
$$\begin{aligned}
- z = \frac{S}{\sqrt{N}} = \frac{1}{\sqrt{N}} (x_1 + x_2 + ... + x_N)
+ z
+ \equiv \frac{t}{\sqrt{N}}
+ = \frac{1}{\sqrt{N}} (x_1 + x_2 + ... + x_N)
\end{aligned}$$
Its characteristic function $$\phi_z(k)$$ is then as follows,
@@ -176,28 +188,30 @@ For sufficiently large $$N$$, we can therefore approximate it using just the fir
$$\begin{aligned}
\ln\!\big( \phi_z(k) \big)
&\approx i k C^{(1)} - \frac{k^2}{2} C^{(2)}
- = i k \overline{z} - \frac{k^2}{2} \sigma_z^2
+ = i k \mu_z - \frac{k^2}{2} \sigma_z^2
\\
+ \implies \quad
\phi_z(k)
- &\approx \exp(i k \overline{z}) \exp(- k^2 \sigma_z^2 / 2)
+ &\approx \exp(i k \mu_z) \exp(- k^2 \sigma_z^2 / 2)
\end{aligned}$$
We take its inverse Fourier transform to get the density $$p(z)$$,
-which turns out to be a Gaussian normal distribution,
-which is even already normalized:
+which turns out to be a Gaussian normal distribution
+and is even already normalized:
$$\begin{aligned}
p(z)
= \hat{\mathcal{F}}^{-1} \{\phi_z(k)\}
- &= \frac{1}{2 \pi} \int_{-\infty}^\infty \exp\!\big(\!-\! i k (z - \overline{z})\big) \exp(- k^2 \sigma_z^2 / 2) \dd{k}
+ &= \frac{1}{2 \pi} \int_{-\infty}^\infty \exp\!\big(\!-\! i k (z - \mu_z)\big) \exp(- k^2 \sigma_z^2 / 2) \dd{k}
\\
- &= \frac{1}{\sqrt{2 \pi \sigma_z^2}} \exp\!\Big(\!-\! \frac{(z - \overline{z})^2}{2 \sigma_z^2} \Big)
+ &= \frac{1}{\sqrt{2 \pi \sigma_z^2}} \exp\!\Big(\!-\! \frac{(z - \mu_z)^2}{2 \sigma_z^2} \Big)
\end{aligned}$$
Therefore, the sum of many independent variables tends to a normal distribution,
regardless of the densities of the individual variables.
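
*(Editorial aside, not part of the commit: the inverse Fourier transform step can be double-checked by evaluating the integral numerically for arbitrary $$\mu_z$$ and $$\sigma_z$$; only the real part survives by symmetry.)*

```python
from math import exp, pi, sqrt, cos

# Numerically evaluate the inverse Fourier integral at a few z values
# and compare to the Gaussian density (arbitrary mu, sigma).
mu, sigma = 1.0, 0.8

def p_integral(z, kmax=20.0, steps=20000):
    dk = 2 * kmax / steps
    total = 0.0
    for i in range(steps + 1):
        k = -kmax + i * dk
        w = 0.5 if i in (0, steps) else 1.0
        # real part of exp(-i k (z - mu)) exp(-k^2 sigma^2 / 2)
        total += w * cos(k * (z - mu)) * exp(-k**2 * sigma**2 / 2) * dk
    return total / (2 * pi)

def p_gauss(z):
    return exp(-(z - mu)**2 / (2 * sigma**2)) / sqrt(2 * pi * sigma**2)

for z in (-0.5, 1.0, 2.3):
    assert abs(p_integral(z) - p_gauss(z)) < 1e-9
```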
+
## References
1. H. Gould, J. Tobochnik,
*Statistical and thermal physics*, 2nd edition,
diff --git a/source/know/concept/conditional-expectation/index.md b/source/know/concept/conditional-expectation/index.md
index f64fa72..cd40315 100644
--- a/source/know/concept/conditional-expectation/index.md
+++ b/source/know/concept/conditional-expectation/index.md
@@ -41,7 +41,7 @@ Where $$Q$$ is a renormalized probability function,
which assigns zero to all events incompatible with $$Y = y$$.
If we allow $$\Omega$$ to be continuous,
then from the definition $$\mathbf{E}[X]$$,
-we know that the following Lebesgue integral can be used,
+we know that the following *Lebesgue integral* can be used,
which we call $$f(y)$$:
$$\begin{aligned}
@@ -103,6 +103,7 @@ such that $$\mathbf{E}[X | \sigma(Y)] = f(Y)$$,
then $$Z = \mathbf{E}[X | \sigma(Y)]$$ is unique.
+
## Properties
A conditional expectation defined in this way has many useful properties,
diff --git a/source/know/concept/dispersive-broadening/index.md b/source/know/concept/dispersive-broadening/index.md
index 746eb6d..9642737 100644
--- a/source/know/concept/dispersive-broadening/index.md
+++ b/source/know/concept/dispersive-broadening/index.md
@@ -9,10 +9,10 @@ categories:
layout: "concept"
---
-In optical fibers, **dispersive broadening** is a (linear) effect
+In optical fibers, **dispersive broadening** is a linear effect
where group velocity dispersion (GVD) "smears out" a pulse in the time domain
due to the different group velocities of its frequencies,
-since pulses always have a non-zero width in the $$\omega$$-domain.
+since pulses always have a nonzero width in the $$\omega$$-domain.
No new frequencies are created.
A pulse envelope $$A(z, t)$$ inside a fiber must obey the nonlinear Schrödinger equation,
@@ -29,7 +29,7 @@ and consider a Gaussian initial condition:
$$\begin{aligned}
A(0, t)
- = \sqrt{P_0} \exp\!\Big(\!-\!\frac{t^2}{2 T_0^2}\Big)
+ = \sqrt{P_0} \exp\!\bigg(\!-\!\frac{t^2}{2 T_0^2}\bigg)
\end{aligned}$$
By [Fourier transforming](/know/concept/fourier-transform/) in $$t$$,
@@ -38,7 +38,8 @@ where it can be seen that the amplitude
decreases and the width increases with $$z$$:
$$\begin{aligned}
- A(z,t) = \sqrt{\frac{P_0}{1 - i \beta_2 z / T_0^2}}
+ A(z,t)
+ = \sqrt{\frac{P_0}{1 - i \beta_2 z / T_0^2}}
\exp\!\bigg(\! -\!\frac{t^2 / (2 T_0^2)}{1 + \beta_2^2 z^2 / T_0^4} \big( 1 + i \beta_2 z / T_0^2 \big) \bigg)
\end{aligned}$$
@@ -48,10 +49,12 @@ as the distance over which the half-width at $$1/e$$ of maximum power
(initially $$T_0$$) increases by a factor of $$\sqrt{2}$$:
$$\begin{aligned}
- T_0 \sqrt{1 + \beta_2^2 L_D^2 / T_0^4} = T_0 \sqrt{2}
+ T_0 \sqrt{1 + \beta_2^2 L_D^2 / T_0^4}
+ = T_0 \sqrt{2}
\qquad \implies \qquad
\boxed{
- L_D = \frac{T_0^2}{|\beta_2|}
+ L_D
+ \equiv \frac{T_0^2}{|\beta_2|}
}
\end{aligned}$$
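
*(Editorial aside, not part of the commit: plugging in rough numbers gives a feel for $$L_D$$. The sketch below assumes a 100 fs pulse and $$\beta_2 \approx -21.7\:\mathrm{ps}^2/\mathrm{km}$$, a typical value for standard single-mode fiber near 1550 nm, not taken from the text.)*

```python
# Dispersion length L_D = T0^2 / |beta_2| for assumed typical values.
T0 = 100e-15       # pulse half-width, s (100 fs)
beta2 = -21.7e-27  # GVD parameter, s^2/m (-21.7 ps^2/km)

L_D = T0**2 / abs(beta2)
print(f"L_D = {L_D:.2f} m")  # prints "L_D = 0.46 m"
```

So such a short pulse already broadens noticeably within half a meter of fiber.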
@@ -68,7 +71,7 @@ where $$\phi(z, t)$$ is the phase of $$A(z, t) = \sqrt{P(z, t)} \exp(i \phi(z, t
$$\begin{aligned}
\omega_{\mathrm{GVD}}(z,t)
- = \pdv{}{t}\Big( \frac{\beta_2 z t^2 / (2 T_0^4)}{1 + \beta_2^2 z^2 / T_0^4} \Big)
+ = \pdv{}{t}\bigg( \frac{\beta_2 z t^2 / (2 T_0^4)}{1 + \beta_2^2 z^2 / T_0^4} \bigg)
= \frac{\beta_2 z / T_0^2}{1 + \beta_2^2 z^2 / T_0^4} \frac{t}{T_0^2}
\end{aligned}$$
@@ -76,7 +79,7 @@ This expression is linear in time, and depending on the sign of $$\beta_2$$,
frequencies on one side of the pulse arrive first,
and those on the other side arrive last.
The effect is stronger for smaller $$T_0$$:
-this makes sense, since short pulses are spectrally wider.
+this makes sense, since shorter pulses are spectrally wider.
The interaction between dispersion and [self-phase modulation](/know/concept/self-phase-modulation/)
leads to many interesting effects,
diff --git a/source/know/concept/holomorphic-function/index.md b/source/know/concept/holomorphic-function/index.md
index cf252c0..976758b 100644
--- a/source/know/concept/holomorphic-function/index.md
+++ b/source/know/concept/holomorphic-function/index.md
@@ -9,13 +9,13 @@ layout: "concept"
---
In complex analysis, a complex function $$f(z)$$ of a complex variable $$z$$
-is called **holomorphic** or **analytic** if it is complex differentiable in the
-neighbourhood of every point of its domain.
+is called **holomorphic** or **analytic** if it is **complex differentiable**
+in the vicinity of every point of its domain.
This is a very strong condition.
As a result, holomorphic functions are infinitely differentiable and
equal their Taylor expansion at every point. In physicists' terms,
-they are extremely "well-behaved" throughout their domain.
+they are very "well-behaved" throughout their domain.
More formally, a given function $$f(z)$$ is holomorphic in a certain region
if the following limit exists for all $$z$$ in that region,
@@ -23,14 +23,17 @@ and for all directions of $$\Delta z$$:
$$\begin{aligned}
\boxed{
- f'(z) = \lim_{\Delta z \to 0} \frac{f(z + \Delta z) - f(z)}{\Delta z}
+ f'(z)
+ = \lim_{\Delta z \to 0} \frac{f(z + \Delta z) - f(z)}{\Delta z}
}
\end{aligned}$$
We decompose $$f$$ into the real functions $$u$$ and $$v$$ of real variables $$x$$ and $$y$$:
$$\begin{aligned}
- f(z) = f(x + i y) = u(x, y) + i v(x, y)
+ f(z)
+ = f(x + i y)
+ = u(x, y) + i v(x, y)
\end{aligned}$$
Since we are free to choose the direction of $$\Delta z$$, we choose $$\Delta x$$ and $$\Delta y$$:
@@ -56,9 +59,9 @@ $$\begin{aligned}
}
\end{aligned}$$
-Therefore, a given function $$f(z)$$ is holomorphic if and only if its real
-and imaginary parts satisfy these equations. This gives an idea of how
-strict the criteria are to qualify as holomorphic.
+Therefore, a given function $$f(z)$$ is holomorphic if and only if
+its real and imaginary parts satisfy these equations.
+This gives an idea of how strict the criteria are to qualify as holomorphic.
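
*(Editorial aside, not part of the commit: the Cauchy-Riemann equations $$\ipdv{u}{x} = \ipdv{v}{y}$$ and $$\ipdv{u}{y} = -\ipdv{v}{x}$$, boxed in the context elided above, are easy to test numerically for a known holomorphic function, e.g. $$f(z) = z^2$$ with $$u = x^2 - y^2$$ and $$v = 2 x y$$.)*

```python
# Finite-difference check of the Cauchy-Riemann equations
# u_x = v_y and u_y = -v_x for f(z) = z^2.
def u(x, y): return x * x - y * y
def v(x, y): return 2 * x * y

h = 1e-6
for (x, y) in [(0.5, -1.2), (2.0, 0.3)]:
    u_x = (u(x + h, y) - u(x - h, y)) / (2 * h)
    u_y = (u(x, y + h) - u(x, y - h)) / (2 * h)
    v_x = (v(x + h, y) - v(x - h, y)) / (2 * h)
    v_y = (v(x, y + h) - v(x, y - h)) / (2 * h)
    assert abs(u_x - v_y) < 1e-6
    assert abs(u_y + v_x) < 1e-6
```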
@@ -70,7 +73,8 @@ provided that $$f(z)$$ is holomorphic for all $$z$$ in the area enclosed by $$C$
$$\begin{aligned}
\boxed{
- \oint_C f(z) \dd{z} = 0
+ \oint_C f(z) \dd{z}
+ = 0
}
\end{aligned}$$
@@ -86,34 +90,36 @@ $$\begin{aligned}
&= \oint_C u \dd{x} - v \dd{y} + i \oint_C v \dd{x} + u \dd{y}
\end{aligned}$$
-Using Green's theorem, we integrate over the area $$A$$ enclosed by $$C$$:
+Using *Green's theorem*, we integrate over the area $$A$$ enclosed by $$C$$:
$$\begin{aligned}
\oint_C f(z) \dd{z}
&= - \iint_A \pdv{v}{x} + \pdv{u}{y} \dd{x} \dd{y} + i \iint_A \pdv{u}{x} - \pdv{v}{y} \dd{x} \dd{y}
\end{aligned}$$
-Since $$f(z)$$ is holomorphic, $$u$$ and $$v$$ satisfy the Cauchy-Riemann
-equations, such that the integrands disappear and the final result is zero.
+Since $$f(z)$$ is holomorphic, $$u$$ and $$v$$ satisfy the Cauchy-Riemann equations,
+such that the integrands disappear and the final result is zero.
{% include proof/end.html id="proof-int-theorem" %}
-An interesting consequence is **Cauchy's integral formula**, which
-states that the value of $$f(z)$$ at an arbitrary point $$z_0$$ is
-determined by its values on an arbitrary contour $$C$$ around $$z_0$$:
+An interesting consequence is **Cauchy's integral formula**,
+which states that the value of $$f(z)$$ at an arbitrary point $$z_0$$
+is determined by its values on an arbitrary contour $$C$$ around $$z_0$$:
$$\begin{aligned}
\boxed{
- f(z_0) = \frac{1}{2 \pi i} \oint_C \frac{f(z)}{z - z_0} \dd{z}
+ f(z_0)
+ = \frac{1}{2 \pi i} \oint_C \frac{f(z)}{z - z_0} \dd{z}
}
\end{aligned}$$
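
*(Editorial aside, not part of the commit: the integral formula is straightforward to verify numerically, e.g. for the entire function $$f(z) = e^z$$ on a circle of arbitrary radius around an arbitrary $$z_0$$.)*

```python
import cmath

# Numerical check of Cauchy's integral formula for f(z) = exp(z)
# on a circle of radius r around z0 (arbitrary choices).
def f(z):
    return cmath.exp(z)

z0, r, steps = 0.3 + 0.4j, 1.5, 4000
total = 0j
for i in range(steps):
    theta = 2 * cmath.pi * i / steps
    z = z0 + r * cmath.exp(1j * theta)
    dz = 1j * r * cmath.exp(1j * theta) * (2 * cmath.pi / steps)
    total += f(z) / (z - z0) * dz
total /= 2j * cmath.pi

assert abs(total - f(z0)) < 1e-9
```

The trapezoidal rule on a periodic integrand converges extremely fast here, so a few thousand steps are already overkill.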
{% include proof/start.html id="proof-int-formula" -%}
-Thanks to the integral theorem, we know that the shape and size
-of $$C$$ is irrelevant. Therefore we choose it to be a circle with radius $$r$$,
-such that the integration variable becomes $$z = z_0 + r e^{i \theta}$$. Then
-we integrate by substitution:
+Thanks to the integral theorem, we know that
+the shape and size of $$C$$ are irrelevant.
+Therefore we choose it to be a circle with radius $$r$$,
+such that the integration variable becomes $$z = z_0 + r e^{i \theta}$$.
+Then we integrate by substitution:
$$\begin{aligned}
\frac{1}{2 \pi i} \oint_C \frac{f(z)}{z - z_0} \dd{z}
diff --git a/source/know/concept/ion-sound-wave/index.md b/source/know/concept/ion-sound-wave/index.md
index 8749f1a..6a9dcff 100644
--- a/source/know/concept/ion-sound-wave/index.md
+++ b/source/know/concept/ion-sound-wave/index.md
@@ -49,7 +49,7 @@ $$\begin{aligned}
Where the perturbations $$n_{i1}$$, $$n_{e1}$$, $$\vb{u}_{i1}$$ and $$\phi_1$$ are tiny,
and the equilibrium components $$n_{i0}$$, $$n_{e0}$$, $$\vb{u}_{i0}$$ and $$\phi_0$$
-by definition satisfy:
+are assumed to satisfy:
$$\begin{aligned}
\pdv{n_{i0}}{t} = 0
@@ -63,11 +63,7 @@ $$\begin{aligned}
\phi_0 = 0
\end{aligned}$$
-Inserting this decomposition into the momentum equations
-yields new equations.
-Note that we will implicitly use $$\vb{u}_{i0} = 0$$
-to pretend that the [material derivative](/know/concept/material-derivative/)
-$$\mathrm{D}/\mathrm{D} t$$ is linear:
+Inserting this decomposition into the momentum equations yields new equations:
$$\begin{aligned}
m_i (n_{i0} \!+\! n_{i1}) \frac{\mathrm{D} (\vb{u}_{i0} \!+\! \vb{u}_{i1})}{\mathrm{D} t}
@@ -77,17 +73,19 @@ $$\begin{aligned}
&= - q_e (n_{e0} \!+\! n_{e1}) \nabla (\phi_0 \!+\! \phi_1) - \gamma_e k_B T_e \nabla (n_{e0} \!+\! n_{e1})
\end{aligned}$$
-Using the defined properties of the equilibrium components
-$$n_{i0}$$, $$n_{e0}$$, $$\vb{u}_{i0}$$ and $$\phi_0$$,
-and neglecting all products of perturbations for being small,
-this reduces to:
+Using the assumed properties of $$n_{i0}$$, $$n_{e0}$$, $$\vb{u}_{i0}$$ and $$\phi_0$$,
+and discarding products of perturbations for being too small,
+we arrive at the below equations.
+Our choice $$\vb{u}_{i0} = 0$$ lets us linearize
+the [material derivative](/know/concept/material-derivative/)
+$$\mathrm{D}/\mathrm{D} t = \ipdv{}{t}$$ for the ions:
$$\begin{aligned}
m_i n_{i0} \pdv{\vb{u}_{i1}}{t}
- &= - q_i n_{i0} \nabla \phi_1 - \gamma_i k_B T_i \nabla n_{i1}
+ &\approx - q_i n_{i0} \nabla \phi_1 - \gamma_i k_B T_i \nabla n_{i1}
\\
0
- &= - q_e n_{e0} \nabla \phi_1 - \gamma_e k_B T_e \nabla n_{e1}
+ &\approx - q_e n_{e0} \nabla \phi_1 - \gamma_e k_B T_e \nabla n_{e1}
\end{aligned}$$
Because we are interested in linear waves,
@@ -123,7 +121,7 @@ to get a relation between $$n_{e1}$$ and $$n_{e0}$$:
$$\begin{aligned}
i \vb{k} \gamma_e k_B T_e n_{e1}
= - i \vb{k} q_e n_{e0} \phi_1
- \quad \implies \quad
+ \qquad \implies \qquad
n_{e1}
= - \frac{q_e \phi_1}{\gamma_e k_B T_e} n_{e0}
\end{aligned}$$
@@ -159,13 +157,13 @@ $$\begin{aligned}
\approx \pdv{n_{i1}}{t} + n_{i0} \nabla \cdot \vb{u}_{i1}
\end{aligned}$$
-Then we insert our plane-wave ansatz,
+Into which we insert our plane-wave ansatz,
and substitute $$n_{i0} = n_0$$ as before, yielding:
$$\begin{aligned}
0
= - i \omega n_{i1} + i n_{i0} \vb{k} \cdot \vb{u}_{i1}
- \quad \implies \quad
+ \qquad \implies \qquad
\vb{k} \cdot \vb{u}_{i1}
= \omega \frac{n_{i1}}{n_{i0}}
= \omega \frac{q_e n_{i1} \phi_1}{k_B T_e n_{e1}}
@@ -187,9 +185,9 @@ $$\begin{gathered}
Finally, we would like to find an expression for $$n_{e1} / n_{i1}$$.
It cannot be $$1$$, because then $$\phi_1$$ could not be nonzero,
according to [Gauss' law](/know/concept/maxwells-equations/).
-Nevertheless, authors often ignore this fact,
+Nevertheless, some authors tend to ignore this fact,
thereby making the so-called **plasma approximation**.
-We will not, and therefore turn to Gauss' law:
+We will not, and thus turn to Gauss' law:
$$\begin{aligned}
\varepsilon_0 \nabla \cdot \vb{E}
@@ -244,7 +242,7 @@ $$\begin{aligned}
}
\end{aligned}$$
-Curiously, unlike a neutral gas,
+Curiously, unlike in a neutral gas,
this velocity is nonzero even if $$T_i = 0$$,
meaning that the waves still exist then.
In fact, usually the electron temperature $$T_e$$ dominates $$T_e \gg T_i$$,
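
*(Editorial aside, not part of the commit: the boxed phase velocity is elided in this diff view, but the standard ion sound speed $$c_s = \sqrt{(\gamma_e k_B T_e + \gamma_i k_B T_i) / m_i}$$ gives a feel for the magnitudes; the plasma parameters below are assumed examples, not from the text.)*

```python
from math import sqrt

# Example magnitude of the ion sound speed for a hydrogen plasma
# with T_e = 10 eV and T_i = 0 (assumed example values).
kB = 1.380649e-23     # J/K (unused here; temperatures given in joules)
eV = 1.602176634e-19  # J
m_i = 1.67262192e-27  # proton mass, kg

gamma_e, gamma_i = 1.0, 3.0  # common choices: isothermal e, 1D adiabatic i
Te_J, Ti_J = 10 * eV, 0.0

c_s = sqrt((gamma_e * Te_J + gamma_i * Ti_J) / m_i)
print(f"c_s = {c_s / 1e3:.1f} km/s")
```

Note that $$c_s$$ stays finite even with $$T_i = 0$$, consistent with the remark above.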
diff --git a/source/know/concept/lagrange-multiplier/index.md b/source/know/concept/lagrange-multiplier/index.md
index a0b22aa..ce5418f 100644
--- a/source/know/concept/lagrange-multiplier/index.md
+++ b/source/know/concept/lagrange-multiplier/index.md
@@ -127,8 +127,22 @@ about the interdependence of a system of equations
then $$\lambda$$ is not even given an expression!
Hence it