Diffstat (limited to 'latex/know')
-rw-r--r--  latex/know/concept/hilbert-space/source.md                   193
-rw-r--r--  latex/know/concept/legendre-transform/source.md                6
-rw-r--r--  latex/know/concept/partial-fraction-decomposition/source.md   53
3 files changed, 249 insertions, 3 deletions
diff --git a/latex/know/concept/hilbert-space/source.md b/latex/know/concept/hilbert-space/source.md
new file mode 100644
index 0000000..7d2ea05
--- /dev/null
+++ b/latex/know/concept/hilbert-space/source.md
@@ -0,0 +1,193 @@
+% Hilbert space
+
+
+# Hilbert space
+
+A **Hilbert space**, also known as an **inner product space**, is an
+abstract **vector space** with a notion of length and angle.
+(Strictly speaking, a Hilbert space must also be *complete*,
+but we will not concern ourselves with that distinction here.)
+
+
+## Vector space
+
+An abstract **vector space** $\mathbb{V}$ is a generalization of the
+traditional concept of vectors as "arrows". It consists of a set of
+objects called **vectors** which support the following (familiar)
+operations:
+
++ **Vector addition**: the sum of two vectors $V$ and $W$, denoted $V + W$.
++ **Scalar multiplication**: product of a vector $V$ with a scalar $a$, denoted $a V$.
+
+In addition, for a given $\mathbb{V}$ to qualify as a proper vector
+space, these operations must obey the following axioms:
+
++ **Addition is associative**: $U + (V + W) = (U + V) + W$
++ **Addition is commutative**: $U + V = V + U$
++ **Addition has an identity**: there exists a $\mathbf{0}$ such that $V + \mathbf{0} = V$
++ **Addition has an inverse**: for every $V$ there exists $-V$ so that $V + (-V) = \mathbf{0}$
++ **Multiplication is associative**: $a (b V) = (a b) V$
++ **Multiplication has an identity**: there exists a $1$ such that $1 V = V$
++ **Multiplication is distributive over scalars**: $(a + b)V = aV + bV$
++ **Multiplication is distributive over vectors**: $a (U + V) = a U + a V$
+
+A set of $N$ vectors $V_1, V_2, ..., V_N$ is **linearly independent** if
+the only way to satisfy the following relation is to set all the scalar coefficients $a_n = 0$:
+
+$$\begin{aligned}
+ \mathbf{0} = \sum_{n = 1}^N a_n V_n
+\end{aligned}$$
+
+In other words, these vectors cannot be expressed in terms of each
+other. Otherwise, they would be **linearly dependent**.
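
This condition can be checked numerically: a set of vectors in $\mathbb{R}^N$ is linearly independent exactly when the matrix with those vectors as columns has full column rank. A small sketch using NumPy (the function name is just for illustration):

```python
import numpy as np

def linearly_independent(vectors):
    """True if the given equal-length vectors are linearly independent,
    i.e. the matrix of column vectors has full column rank."""
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == len(vectors)

# (1,0) and (0,1) are independent; adding (1,1) makes the set dependent.
assert linearly_independent([[1, 0], [0, 1]])
assert not linearly_independent([[1, 0], [0, 1], [1, 1]])
```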
+
+A vector space $\mathbb{V}$ has **dimension** $N$ if at most $N$ of
+its vectors can be linearly independent. All other vectors in
+$\mathbb{V}$ can then be written as a **linear combination** of these $N$
+so-called **basis vectors**.
+
+Let $\vu{e}_1, ..., \vu{e}_N$ be the basis vectors, then any
+vector $V$ in the same space can be **expanded** in the basis according to
+the unique "weights" $v_n$, known as the **components** of the vector $V$
+in that basis:
+
+$$\begin{aligned}
+ V = \sum_{n = 1}^N v_n \vu{e}_n
+\end{aligned}$$
+
+Using these, the vector space operations can then be implemented as follows:
+
+$$\begin{gathered}
+    V = \sum_{n = 1}^N v_n \vu{e}_n
+    \quad
+    W = \sum_{n = 1}^N w_n \vu{e}_n
+ \\
+ \quad \implies \quad
+ V + W = \sum_{n = 1}^N (v_n + w_n) \vu{e}_n
+ \qquad
+ a V = \sum_{n = 1}^N a v_n \vu{e}_n
+\end{gathered}$$
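
In other words, once a basis is fixed, the abstract operations reduce to elementwise arithmetic on the component arrays, which is exactly what array libraries provide. A minimal sketch (the component values are arbitrary):

```python
import numpy as np

# Components of V and W in some shared basis e_1, ..., e_N.
v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0, 6.0])

V_plus_W = v + w   # components of V + W are (v_n + w_n)
aV = 2.5 * v       # components of a V are (a v_n)
```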
+
+
+## Inner product
+
+A given vector space $\mathbb{V}$ can be promoted to a **Hilbert space**
+or **inner product space** if it supports an operation $\braket{U}{V}$
+called the **inner product**, which takes two vectors and returns a
+scalar, and has the following properties:
+
++ **Conjugate symmetry**: $\braket{U}{V} = (\braket{V}{U})^*$, where ${}^*$ is the complex conjugate.
++ **Positive definiteness**: $\braket{V}{V} \ge 0$, and $\braket{V}{V} = 0$ if and only if $V = \mathbf{0}$.
++ **Linearity in second operand**: $\braket{U}{(a V + b W)} = a \braket{U}{V} + b \braket{U}{W}$.
+
+The inner product describes the lengths and angles of vectors, and in
+Euclidean space it is implemented by the dot product.
+
+The **magnitude** or **norm** $|V|$ of a vector $V$ is given by
+$|V| = \sqrt{\braket{V}{V}}$ and represents the real non-negative length of $V$.
+A **unit vector** has a norm of 1.
+
+Two vectors $U$ and $V$ are **orthogonal** if their inner product
+$\braket{U}{V} = 0$. If in addition to being orthogonal, $|U| = 1$ and
+$|V| = 1$, then $U$ and $V$ are known as **orthonormal** vectors.
+
+Orthonormality is a desirable property for basis vectors, so if they are
+not already orthonormal, it is common to manually derive a new
+orthonormal basis from them using e.g. the Gram-Schmidt method.
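
For real vectors with the dot product, the classical Gram-Schmidt procedure can be sketched as follows (a textbook version for illustration; numerically robust code would prefer e.g. a QR decomposition):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a linearly independent set of real vectors
    using the classical Gram-Schmidt procedure."""
    basis = []
    for v in vectors:
        u = np.array(v, dtype=float)
        for e in basis:
            u -= np.dot(e, u) * e   # subtract the projection onto e
        basis.append(u / np.linalg.norm(u))
    return basis

# The returned vectors are orthonormal: <e1|e2> = 0, |e1| = |e2| = 1.
e1, e2 = gram_schmidt([[3.0, 1.0], [2.0, 2.0]])
```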
+
+As for the implementation of the inner product, it is given by:
+
+$$\begin{gathered}
+ V = \sum_{n = 1}^N v_n \vu{e}_n
+ \quad
+ W = \sum_{n = 1}^N w_n \vu{e}_n
+ \\
+ \quad \implies \quad
+    \braket{V}{W} = \sum_{n = 1}^N \sum_{m = 1}^N v_n^* w_m \braket{\vu{e}_n}{\vu{e}_m}
+\end{gathered}$$
+
+If the basis vectors $\vu{e}_1, ..., \vu{e}_N$ are already
+orthonormal, this reduces to:
+
+$$\begin{aligned}
+ \braket{V}{W} = \sum_{n = 1}^N v_n^* w_n
+\end{aligned}$$
+
+As it turns out, the components $v_n$ are given by the inner product
+with $\vu{e}_n$, where $\delta_{nm}$ is the Kronecker delta:
+
+$$\begin{aligned}
+ \braket{\vu{e}_n}{V} = \sum_{m = 1}^N \delta_{nm} v_m = v_n
+\end{aligned}$$
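
For complex component arrays in an orthonormal basis, this inner product is exactly what NumPy's `vdot` computes (it conjugates its first argument), and a component $v_n = \braket{\vu{e}_n}{V}$ is the inner product with the $n$-th standard basis vector. A small sketch with arbitrary components:

```python
import numpy as np

v = np.array([1 + 2j, 3 - 1j])
w = np.array([2 + 0j, 1 + 1j])

# <V|W> = sum_n v_n^* w_n  (np.vdot conjugates the first argument)
inner = np.vdot(v, w)

# Component v_2 = <e_2|V>, with e_2 the second standard basis vector.
e2 = np.array([0, 1])
v2 = np.vdot(e2, v)
```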
+
+
+## Infinite dimensions
+
+As the dimensionality $N$ tends to infinity, things may or may not
+change significantly, depending on whether $N$ is **countably** or
+**uncountably** infinite.
+
+In the former case, not much changes: the infinitely many **discrete**
+basis vectors $\vu{e}_n$ can all still be made orthonormal as usual,
+and as before:
+
+$$\begin{aligned}
+ V = \sum_{n = 1}^\infty v_n \vu{e}_n
+\end{aligned}$$
+
+A good example of such a countably infinite-dimensional basis is
+the set of solution functions of a Sturm-Liouville problem.
+
+However, if the dimensionality is uncountably infinite, the basis
+vectors are **continuous** and cannot be labeled by $n$. For example, all
+complex functions $f(x)$ defined for $x \in [a, b]$ which
+satisfy $f(a) = f(b) = 0$ form such a vector space.
+In this case the vector $\ket{f}$ is expanded over the continuum of
+basis vectors $\ket{x}$, with the function values $f(x) = \braket{x}{f}$
+acting as the components:
+
+$$\begin{aligned}
+    \ket{f} = \int_a^b \braket{x}{f} \ket{x} \dd{x}
+\end{aligned}$$
+
+Similarly, the inner product $\braket{f}{g}$ must also be redefined as
+follows:
+
+$$\begin{aligned}
+ \braket{f}{g} = \int_a^b f^*(x) \: g(x) \dd{x}
+\end{aligned}$$
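
This functional inner product can be approximated by quadrature. As a sanity check (the interval $[0, \pi]$ and the sine functions are just a convenient example): $\sin x$ and $\sin 2x$ are orthogonal on $[0, \pi]$, while $\braket{\sin}{\sin} = \pi/2$ there.

```python
import numpy as np

def inner(f, g, a, b, n=100001):
    """Approximate <f|g> = integral of f*(x) g(x) dx over [a, b]
    using the trapezoidal rule on a uniform grid."""
    x = np.linspace(a, b, n)
    y = np.conj(f(x)) * g(x)
    dx = x[1] - x[0]
    return (y.sum() - 0.5 * (y[0] + y[-1])) * dx

# sin(x) and sin(2x) are orthogonal on [0, pi]:
ortho = inner(np.sin, lambda x: np.sin(2 * x), 0, np.pi)
# <sin|sin> on [0, pi] equals pi/2:
norm_sq = inner(np.sin, np.sin, 0, np.pi)
```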
+
+The concept of orthonormality must also be weakened. A function
+$f(x)$ with finite norm can be normalized as usual, but the basis
+vectors $x$ themselves cannot, since each represents an infinitesimal
+section of the real line.
+
+The rationale in this case is that the identity operator $\hat{I}$ must
+be preserved, which is given here in [Dirac notation](/know/concept/dirac-notation/):
+
+$$\begin{aligned}
+ \hat{I} = \int_a^b \ket{\xi} \bra{\xi} \dd{\xi}
+\end{aligned}$$
+
+Applying the identity operator to $f(x)$ should just give $f(x)$ again:
+
+$$\begin{aligned}
+ f(x) = \braket{x}{f} = \matrixel{x}{\hat{I}}{f}
+ = \int_a^b \braket{x}{\xi} \braket{\xi}{f} \dd{\xi}
+ = \int_a^b \braket{x}{\xi} f(\xi) \dd{\xi}
+\end{aligned}$$
+
+For the latter integral to turn into $f(x)$, it is clear that
+$\braket{x}{\xi}$ must be a [Dirac delta function](/know/concept/dirac-delta-function/),
+i.e. $\braket{x}{\xi} = \delta(x - \xi)$:
+
+$$\begin{aligned}
+ \int_a^b \braket{x}{\xi} f(\xi) \dd{\xi}
+ = \int_a^b \delta(x - \xi) f(\xi) \dd{\xi}
+ = f(x)
+\end{aligned}$$
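
The sifting property can be checked numerically by standing in a narrow normalized Gaussian for $\delta$ (a sketch; the width `eps`, the interval, and the test function $\cos$ are arbitrary choices):

```python
import numpy as np

def delta_approx(x, eps=1e-3):
    """Narrow normalized Gaussian: a standard approximation of delta(x)."""
    return np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

# Sifting: integrating delta(x0 - xi) * f(xi) over xi recovers f(x0).
xi = np.linspace(-1.0, 1.0, 200001)
x0 = 0.3
sift = np.sum(delta_approx(x0 - xi) * np.cos(xi)) * (xi[1] - xi[0])
# sift is approximately cos(0.3)
```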
+
+Consequently, $\braket{x}{\xi} = 0$ if $x \neq \xi$ as expected for an
+orthogonal set of basis vectors, but if $x = \xi$ the inner product
+$\braket{x}{\xi}$ is infinite, unlike earlier.
+
+Technically, because the basis vectors $x$ cannot be normalized, they
+are not members of a Hilbert space, but rather of a superset called a
+**rigged Hilbert space**. Such vectors have no finite inner product with
+themselves, but do have one with all vectors from the actual Hilbert
+space.
diff --git a/latex/know/concept/legendre-transform/source.md b/latex/know/concept/legendre-transform/source.md
index 954b6fc..20afdf7 100644
--- a/latex/know/concept/legendre-transform/source.md
+++ b/latex/know/concept/legendre-transform/source.md
@@ -5,9 +5,9 @@
The **Legendre transform** of a function $f(x)$ is a new function $L(f')$,
which depends only on the derivative $f'(x)$ of $f(x)$, and from which
-the original function $f(x)$ can be reconstructed. The point is, just
-like other transforms (e.g. Fourier), that $L(f')$ contains the same
-information as $f(x)$, just in a different form.
+the original function $f(x)$ can be reconstructed. The point is,
+analogously to other transforms (e.g. [Fourier](/know/concept/fourier-transform/)),
+that $L(f')$ contains the same information as $f(x)$, just in a different form.
Let us choose an arbitrary point $x_0 \in [a, b]$ in the domain of
$f(x)$. Consider a line $y(x)$ tangent to $f(x)$ at $x = x_0$, which has
diff --git a/latex/know/concept/partial-fraction-decomposition/source.md b/latex/know/concept/partial-fraction-decomposition/source.md
new file mode 100644
index 0000000..aa03f9c
--- /dev/null
+++ b/latex/know/concept/partial-fraction-decomposition/source.md
@@ -0,0 +1,53 @@
+% Partial fraction decomposition
+
+
+# Partial fraction decomposition
+
+*Partial fraction decomposition* or *expansion* is a method to rewrite a
+quotient of two polynomials $g(x)$ and $h(x)$, where the numerator
+$g(x)$ is of lower degree than $h(x)$, as a sum of fractions with $x$ in
+the denominator:
+
+$$\begin{aligned}
+ f(x) = \frac{g(x)}{h(x)} = \frac{c_1}{x - h_1} + \frac{c_2}{x - h_2} + ...
+\end{aligned}$$
+
+Where $h_n$ etc. are the roots of the denominator $h(x)$. If all $N$ of
+these roots are distinct, then it is sufficient to simply posit:
+
+$$\begin{aligned}
+ \boxed{
+ f(x) = \frac{c_1}{x - h_1} + \frac{c_2}{x - h_2} + ... + \frac{c_N}{x - h_N}
+ }
+\end{aligned}$$
+
+Then the constant coefficients $c_n$ can either be found the hard way,
+by multiplying the denominators around and solving a system of $N$
+equations, or the easy way by using the following trick:
+
+$$\begin{aligned}
+ \boxed{
+ c_n = \lim_{x \to h_n} \big( f(x) (x - h_n) \big)
+ }
+\end{aligned}$$
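
For distinct roots the limit simply cancels the factor $(x - h_n)$, giving $c_n = g(h_n) / \prod_{m \neq n} (h_n - h_m)$ when $h(x)$ is monic. A small sketch (the function name and the example $f(x) = 1 / ((x-1)(x-2))$ are just for illustration):

```python
def pf_coefficients(g, roots):
    """Coefficients c_n of the partial fraction decomposition of
    g(x) / prod_n (x - h_n), assuming distinct roots h_n."""
    coeffs = []
    for n, hn in enumerate(roots):
        denom = 1.0
        for m, hm in enumerate(roots):
            if m != n:
                denom *= hn - hm   # product of (h_n - h_m) over m != n
        coeffs.append(g(hn) / denom)
    return coeffs

# 1 / ((x-1)(x-2)) = -1/(x-1) + 1/(x-2)
c = pf_coefficients(lambda x: 1.0, [1.0, 2.0])
```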
+
+If $h_1$ is a root with multiplicity $m > 1$, then the sum takes the
+form of:
+
+$$\begin{aligned}
+ \boxed{
+ f(x)
+        = \frac{c_{1,1}}{x - h_1} + \frac{c_{1,2}}{(x - h_1)^2} + ... + \frac{c_{1,m}}{(x - h_1)^m}
+ }
+\end{aligned}$$
+
+Where $c_{1,j}$ are found by putting the terms on a common denominator,
+e.g.:
+
+$$\begin{aligned}
+ \frac{c_{1,1}}{x - h_1} + \frac{c_{1,2}}{(x - h_1)^2}
+ = \frac{c_{1,1} (x - h_1) + c_{1,2}}{(x - h_1)^2}
+\end{aligned}$$
+
+And then, using the linear independence of $x^0, x^1, x^2, ...$, equating
+the coefficients of each power of $x$ and solving the resulting system of
+$m$ equations to find all $c_{1,1}, ..., c_{1,m}$.
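
This coefficient-matching amounts to a small linear system. As a worked example (chosen for illustration, not taken from the text above): for $f(x) = (x + 1)/(x - 2)^2$, matching $x + 1 = c_{1,1}(x - 2) + c_{1,2}$ gives one equation per power of $x$:

```python
import numpy as np

# Match x + 1 = c11*(x - 2) + c12, i.e.
#   coefficient of x^1:  1 = c11
#   coefficient of x^0:  1 = -2*c11 + c12
A = np.array([[ 1.0, 0.0],
              [-2.0, 1.0]])
b = np.array([1.0, 1.0])
c11, c12 = np.linalg.solve(A, b)
# So (x + 1)/(x - 2)^2 = 1/(x - 2) + 3/(x - 2)^2.
```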