# Hilbert space
A **Hilbert space**, also known as an **inner product space**, is an
abstract **vector space** with a notion of length and angle.
(Strictly speaking, a Hilbert space is an inner product space
that is also *complete*, but this subtlety is rarely important in practice.)
## Vector space
An abstract **vector space** $\mathbb{V}$ is a generalization of the
traditional concept of vectors as "arrows". It consists of a set of
objects called **vectors** which support the following (familiar)
operations:
+ **Vector addition**: the sum of two vectors $V$ and $W$, denoted $V + W$.
+ **Scalar multiplication**: product of a vector $V$ with a scalar $a$, denoted $a V$.
In addition, for a given $\mathbb{V}$ to qualify as a proper vector
space, these operations must obey the following axioms:
+ **Addition is associative**: $U + (V + W) = (U + V) + W$
+ **Addition is commutative**: $U + V = V + U$
+ **Addition has an identity**: there exists a $\mathbf{0}$ such that $V + \mathbf{0} = V$
+ **Addition has an inverse**: for every $V$ there exists $-V$ such that $V + (-V) = \mathbf{0}$
+ **Multiplication is associative**: $a (b V) = (a b) V$
+ **Multiplication has an identity**: There exists a $1$ such that $1 V = V$
+ **Multiplication is distributive over scalars**: $(a + b)V = aV + bV$
+ **Multiplication is distributive over vectors**: $a (U + V) = a U + a V$
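These axioms are satisfied not only by traditional arrows, but by many other
kinds of objects too. For example, all real polynomials form a vector space:
the sum of two polynomials is a polynomial, a scalar multiple of a polynomial
is a polynomial, and the zero polynomial is the additive identity $\mathbf{0}$.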
A set of $N$ vectors $V_1, V_2, ..., V_N$ is **linearly independent** if
the only way to satisfy the following relation is to set all the scalar coefficients $a_n = 0$:
$$\begin{aligned}
\mathbf{0} = \sum_{n = 1}^N a_n V_n
\end{aligned}$$
In other words, these vectors cannot be expressed in terms of each
other. Otherwise, they would be **linearly dependent**.
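Once vectors are represented by their components in some basis (introduced
below), linear independence can be tested numerically: stack the component
vectors as columns of a matrix and check its rank. A minimal sketch using
NumPy, where the function name is just for illustration:

```python
import numpy as np

def are_independent(vectors):
    """Return True if the given component vectors are linearly independent."""
    # Stack the vectors as matrix columns; they are independent
    # exactly when the rank equals the number of vectors.
    M = np.column_stack(vectors)
    return np.linalg.matrix_rank(M) == len(vectors)

# Two independent vectors, plus a third that is their sum:
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
print(are_independent([v1, v2]))           # True
print(are_independent([v1, v2, v1 + v2]))  # False
```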
A vector space $\mathbb{V}$ has **dimension** $N$ if at most $N$ of
its vectors can be linearly independent. All other vectors in
$\mathbb{V}$ can then be written as a **linear combination** of these $N$ **basis vectors**.
If $\vu{e}_1, ..., \vu{e}_N$ are the basis vectors, then any
vector $V$ in the same space can be **expanded** in the basis according to
the unique weights $v_n$, known as the **components** of $V$
in that basis:
$$\begin{aligned}
V = \sum_{n = 1}^N v_n \vu{e}_n
\end{aligned}$$
Using these, the vector space operations can then be implemented as follows:
$$\begin{gathered}
V = \sum_{n = 1}^N v_n \vu{e}_n
\quad
W = \sum_{n = 1}^N w_n \vu{e}_n
\\
\quad \implies \quad
V + W = \sum_{n = 1}^N (v_n + w_n) \vu{e}_n
\qquad
a V = \sum_{n = 1}^N a v_n \vu{e}_n
\end{gathered}$$
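Since both operations reduce to elementwise arithmetic on the components,
NumPy arrays model a finite-dimensional vector space directly.
A trivial sketch with arbitrary example values:

```python
import numpy as np

# Components of V and W in some fixed basis (arbitrary values):
v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0, 6.0])
a = 2.0

print(v + w)  # components of V + W
print(a * v)  # components of a V
```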
## Inner product
A given vector space $\mathbb{V}$ can be promoted to a **Hilbert space**
or **inner product space** if it supports an operation $\braket{U}{V}$
called the **inner product**, which takes two vectors and returns a
scalar, and has the following properties:
+ **Conjugate symmetry**: $\braket{U}{V} = (\braket{V}{U})^*$, where ${}^*$ is the complex conjugate.
+ **Positive definiteness**: $\braket{V}{V} \ge 0$, with $\braket{V}{V} = 0$ if and only if $V = \mathbf{0}$.
+ **Linearity in second operand**: $\braket{U}{(a V + b W)} = a \braket{U}{V} + b \braket{U}{W}$.
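Note that the first and third properties together imply that the inner
product is *antilinear* in its first operand:
$$\begin{aligned}
\braket{(a U + b W)}{V}
= \big( \braket{V}{(a U + b W)} \big)^*
= a^* \braket{U}{V} + b^* \braket{W}{V}
\end{aligned}$$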
The inner product describes the lengths and angles of vectors, and in
Euclidean space it is implemented by the dot product.
The **magnitude** or **norm** $|V|$ of a vector $V$ is given by
$|V| = \sqrt{\braket{V}{V}}$ and represents the real, non-negative length of $V$.
A **unit vector** has a norm of 1.
Two vectors $U$ and $V$ are **orthogonal** if their inner product
$\braket{U}{V} = 0$. If in addition to being orthogonal, $|U| = 1$ and
$|V| = 1$, then $U$ and $V$ are known as **orthonormal** vectors.
Orthonormality is desirable for basis vectors, so if they are
not already orthonormal, it is common to manually derive a new
orthonormal basis from them using e.g. the [Gram-Schmidt method](/know/concept/gram-schmidt-method).
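For finite component vectors, this procedure is easy to sketch numerically.
Below is a minimal illustrative implementation (not the linked article's code
verbatim), using NumPy's `vdot`, which implements the complex inner product:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent component vectors."""
    basis = []
    for v in vectors:
        # Subtract the projections onto the already-built orthonormal vectors.
        for e in basis:
            v = v - np.vdot(e, v) * e
        basis.append(v / np.linalg.norm(v))
    return basis

# Example: two non-orthogonal vectors in R^2.
e1, e2 = gram_schmidt([np.array([3.0, 1.0]), np.array([2.0, 2.0])])
print(np.vdot(e1, e2))                          # ~0: orthogonal
print(np.linalg.norm(e1), np.linalg.norm(e2))   # 1.0 1.0: normalized
```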
In terms of components, the inner product is computed as follows:
$$\begin{gathered}
V = \sum_{n = 1}^N v_n \vu{e}_n
\quad
W = \sum_{n = 1}^N w_n \vu{e}_n
\\
\quad \implies \quad
\braket{V}{W} = \sum_{n = 1}^N \sum_{m = 1}^N v_n^* w_m \braket{\vu{e}_n}{\vu{e}_m}
\end{gathered}$$
If the basis vectors $\vu{e}_1, ..., \vu{e}_N$ are already
orthonormal, this reduces to:
$$\begin{aligned}
\braket{V}{W} = \sum_{n = 1}^N v_n^* w_n
\end{aligned}$$
As it turns out, the components $v_n$ are given by the inner product
with $\vu{e}_n$, where $\delta_{nm}$ is the Kronecker delta:
$$\begin{aligned}
\braket{\vu{e}_n}{V}
= \sum_{m = 1}^N v_m \braket{\vu{e}_n}{\vu{e}_m}
= \sum_{m = 1}^N \delta_{nm} v_m
= v_n
\end{aligned}$$
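These formulas are easy to check numerically. A minimal sketch using NumPy's
`vdot`, which conjugates its first argument just like the $v_n^*$ in the sum
(all values are arbitrary examples):

```python
import numpy as np

v = np.array([1 + 2j, 3 - 1j])
w = np.array([2 - 1j, 0 + 1j])

# <V|W> = sum_n v_n^* w_n: np.vdot conjugates its first argument.
print(np.vdot(v, w))

# Conjugate symmetry: <V|W> should equal <W|V>^*.
print(np.conj(np.vdot(w, v)))

# Components are recovered by inner products with orthonormal basis vectors:
e1 = np.array([1.0, 0.0])
print(np.vdot(e1, v))  # gives v_1 = 1+2j
```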
## Infinite dimensions
As the dimensionality $N$ tends to infinity, things may or may not
change significantly, depending on whether $N$ is **countably** or
**uncountably** infinite.
In the former case, not much changes: the infinitely many **discrete**
basis vectors $\vu{e}_n$ can all still be made orthonormal as usual,
and as before:
$$\begin{aligned}
V = \sum_{n = 1}^\infty v_n \vu{e}_n
\end{aligned}$$
A good example of a basis for such a countably infinite-dimensional space
is the set of eigenfunctions of a Sturm-Liouville problem.
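For instance, on $x \in [0, 1]$, the Sturm-Liouville problem
$- f_n''(x) = \lambda_n f_n(x)$ with $f_n(0) = f_n(1) = 0$
has the countable set of eigenfunctions:
$$\begin{aligned}
f_n(x) = \sqrt{2} \sin(n \pi x)
\qquad \quad
\lambda_n = n^2 \pi^2
\qquad \quad
n = 1, 2, 3, ...
\end{aligned}$$
These are orthonormal with respect to the function inner product defined below.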
However, if the dimensionality is uncountably infinite, the basis
vectors are **continuous** and cannot be labeled by $n$. For example, all
complex functions $f(x)$ defined for $x \in [a, b]$ which
satisfy $f(a) = f(b) = 0$ form such a vector space.
In this case $f$ is expanded as follows, where each value of $x$ labels
a basis vector $\ket{x}$, written in [Dirac notation](/know/concept/dirac-notation/),
and the components of $f$ are simply its values $f(x) = \braket{x}{f}$:
$$\begin{aligned}
\ket{f} = \int_a^b \braket{x}{f} \ket{x} \dd{x}
\end{aligned}$$
Similarly, the inner product $\braket{f}{g}$ must also be redefined as
follows:
$$\begin{aligned}
\braket{f}{g} = \int_a^b f^*(x) \: g(x) \dd{x}
\end{aligned}$$
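A quick numerical sketch of this integral, approximated on a grid with the
trapezoidal rule, and used to verify the orthonormality of the sine modes
mentioned above (the grid size and mode numbers are arbitrary choices):

```python
import numpy as np

a, b = 0.0, 1.0
x = np.linspace(a, b, 2001)

def inner(f, g):
    """Approximate <f|g> = int_a^b f^*(x) g(x) dx with the trapezoidal rule."""
    return np.trapz(np.conj(f(x)) * g(x), x)

f1 = lambda t: np.sqrt(2) * np.sin(1 * np.pi * t)
f2 = lambda t: np.sqrt(2) * np.sin(2 * np.pi * t)

print(inner(f1, f1))  # ~1: f1 is normalized
print(inner(f1, f2))  # ~0: f1 and f2 are orthogonal
```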
The concept of orthonormality must also be weakened. A square-integrable
function $f(x)$ can be normalized as usual, but the basis vectors $x$ themselves
cannot, since each represents an infinitesimal section of the real line.
The rationale in this case is that the action of the identity operator $\hat{I}$ must
be preserved, which is expressed in this basis as follows:
$$\begin{aligned}
\hat{I} = \int_a^b \ket{\xi} \bra{\xi} \dd{\xi}
\end{aligned}$$
Applying the identity operator to $f(x)$ should just give $f(x)$ again:
$$\begin{aligned}
f(x) = \braket{x}{f} = \matrixel{x}{\hat{I}}{f}
= \int_a^b \braket{x}{\xi} \braket{\xi}{f} \dd{\xi}
= \int_a^b \braket{x}{\xi} f(\xi) \dd{\xi}
\end{aligned}$$
For the latter integral to reduce to $f(x)$,
$\braket{x}{\xi}$ must be a [Dirac delta function](/know/concept/dirac-delta-function/),
i.e. $\braket{x}{\xi} = \delta(x - \xi)$:
$$\begin{aligned}
\int_a^b \braket{x}{\xi} f(\xi) \dd{\xi}
= \int_a^b \delta(x - \xi) f(\xi) \dd{\xi}
= f(x)
\end{aligned}$$
Consequently, $\braket{x}{\xi} = 0$ if $x \neq \xi$ as expected for an
orthogonal set of basis vectors, but if $x = \xi$ the inner product
$\braket{x}{\xi}$ is infinite, unlike earlier.
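This behavior can be mimicked numerically by discretizing $[a, b]$ into a
grid: the continuum basis vector $\ket{\xi}$ then becomes a spike of height
$1 / \Delta x$, whose norm blows up as the grid is refined. A rough sketch of
this correspondence (the grid size is an arbitrary choice):

```python
import numpy as np

a, b, N = 0.0, 1.0, 1000
x, dx = np.linspace(a, b, N, retstep=True)

def ket_x(i):
    """Discrete stand-in for the basis vector |x_i>: a spike of height 1/dx."""
    v = np.zeros(N)
    v[i] = 1.0 / dx
    return v

f = np.sin(2 * np.pi * x)  # arbitrary real test function

# "Integrating" the spike against f recovers f(x_i), like delta(x - xi):
i = 250
print(np.sum(ket_x(i) * f) * dx, f[i])  # both equal f(x_i)

# But the norm of |x_i> itself diverges as the grid is refined:
print(np.sum(ket_x(i) ** 2) * dx)  # = 1/dx -> infinity as dx -> 0
```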
Technically, because the basis vectors $x$ cannot be normalized, they
are not members of a Hilbert space, but rather of a superset called a
**rigged Hilbert space**. Such vectors have no finite inner product with
themselves, but do have one with all vectors from the actual Hilbert
space.