Categories: Mathematics, Quantum mechanics.

Hilbert space

A Hilbert space, also called an inner product space, is an abstract vector space with a notion of length and angle.

Vector space

An abstract vector space $\mathbb{V}$ is a generalization of the traditional concept of vectors as "arrows". It consists of a set of objects called vectors which support the following (familiar) operations:

- Vector addition: any two vectors $V$ and $W$ can be added to give a third vector $V + W$.
- Scalar multiplication: any vector $V$ can be multiplied by a scalar $a$ to give a new vector $a V$.

In addition, for a given $\mathbb{V}$ to qualify as a proper vector space, these operations must obey the following axioms:

- Addition is commutative: $V + W = W + V$.
- Addition is associative: $(U + V) + W = U + (V + W)$.
- There is an additive identity $\mathbf{0}$ such that $V + \mathbf{0} = V$.
- Every $V$ has an additive inverse $-V$ such that $V + (-V) = \mathbf{0}$.
- Scalar multiplication is associative: $a (b V) = (a b) V$.
- There is a multiplicative identity: $1 V = V$.
- Scalar multiplication distributes over vector addition: $a (V + W) = a V + a W$.
- Scalar multiplication distributes over scalar addition: $(a + b) V = a V + b V$.

A set of $N$ vectors $V_1, V_2, ..., V_N$ is linearly independent if the only way to satisfy the following relation is to set all the scalar coefficients $a_n = 0$:

$$\begin{aligned}
    \mathbf{0}
    = \sum_{n = 1}^N a_n V_n
\end{aligned}$$

In other words, these vectors cannot be expressed in terms of each other. Otherwise, they would be linearly dependent.
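
For intuition, here is a small Python sketch (our own, not from the text) that tests this numerically: the vectors $V_1, ..., V_N$ are linearly independent exactly when the matrix having them as rows has full rank.

```python
def matrix_rank(rows, eps=1e-12):
    """Rank of a small real matrix, found by Gaussian elimination."""
    m = [list(r) for r in rows]
    rank = 0
    for col in range(len(m[0])):
        # find a pivot row with a nonzero entry in this column
        pivot = next((r for r in range(rank, len(m)) if abs(m[r][col]) > eps), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        # eliminate this column from all rows below the pivot
        for r in range(rank + 1, len(m)):
            f = m[r][col] / m[rank][col]
            m[r] = [a - f * b for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

def linearly_independent(vectors):
    """True iff the only solution of 0 = sum_n a_n V_n is all a_n = 0."""
    return matrix_rank(vectors) == len(vectors)

print(linearly_independent([(1, 0, 0), (0, 1, 0), (0, 0, 1)]))  # True
print(linearly_independent([(1, 2, 3), (2, 4, 6)]))             # False: V_2 = 2 V_1
```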

A vector space $\mathbb{V}$ has dimension $N$ if at most $N$ of its vectors can be linearly independent. All other vectors in $\mathbb{V}$ can then be written as a linear combination of these $N$ basis vectors.

Let $\vu{e}_1, ..., \vu{e}_N$ be the basis vectors; then any vector $V$ in the same space can be expanded in the basis according to the unique weights $v_n$, known as the components of $V$ in that basis:

$$\begin{aligned}
    V
    = \sum_{n = 1}^N v_n \vu{e}_n
\end{aligned}$$

Using these, the vector space operations can then be implemented as follows:

$$\begin{gathered}
    V = \sum_{n = 1}^N v_n \vu{e}_n
    \quad
    W = \sum_{n = 1}^N w_n \vu{e}_n
    \\
    \implies \quad
    V + W = \sum_{n = 1}^N (v_n + w_n) \vu{e}_n
    \qquad
    a V = \sum_{n = 1}^N a v_n \vu{e}_n
\end{gathered}$$
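
As a concrete sketch (our own minimal Python, not from the text), these componentwise definitions translate directly into code, with a vector represented by its list of components $v_n$ in a fixed basis:

```python
def vec_add(v, w):
    """(V + W)_n = v_n + w_n"""
    return [vn + wn for vn, wn in zip(v, w)]

def vec_scale(a, v):
    """(a V)_n = a * v_n"""
    return [a * vn for vn in v]

V = [1.0, 2.0, 3.0]
W = [4.0, 5.0, 6.0]
print(vec_add(V, W))      # [5.0, 7.0, 9.0]
print(vec_scale(2.0, V))  # [2.0, 4.0, 6.0]
```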

Inner product

A given vector space $\mathbb{V}$ can be promoted to a Hilbert space or inner product space if it supports an operation $\Inprod{U}{V}$ called the inner product, which takes two vectors and returns a scalar, and has the following properties:

- Conjugate symmetry: $\Inprod{U}{V} = \Inprod{V}{U}^*$.
- Linearity in the second argument: $\Inprod{U}{a V + b W} = a \Inprod{U}{V} + b \Inprod{U}{W}$.
- Positive-definiteness: $\Inprod{V}{V} \ge 0$, with equality only if $V = \mathbf{0}$.

The inner product describes the lengths and angles of vectors, and in Euclidean space it is implemented by the dot product.

The magnitude or norm $|V|$ of a vector $V$ is given by $|V| = \sqrt{\Inprod{V}{V}}$ and represents the real positive length of $V$. A unit vector has a norm of 1.

Two vectors $U$ and $V$ are orthogonal if their inner product $\Inprod{U}{V} = 0$. If, in addition to being orthogonal, $|U| = 1$ and $|V| = 1$, then $U$ and $V$ are known as orthonormal vectors.

Orthonormality is desirable for basis vectors, so if a given basis is not already orthonormal, it is common to convert it into a new orthonormal basis, e.g. using the Gram-Schmidt method.
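
A minimal Gram-Schmidt sketch in Python (our own code, assuming the componentwise complex inner product $\sum_n v_n^* w_n$ given below): each vector has its projections onto the earlier basis vectors subtracted, and is then normalized.

```python
import math

def inprod(u, v):
    """<U|V> = sum_n u_n* v_n (conjugate on the first argument)."""
    return sum(un.conjugate() * vn for un, vn in zip(u, v))

def gram_schmidt(vectors):
    """Turn a linearly independent list of vectors into an orthonormal basis."""
    basis = []
    for v in vectors:
        # subtract the projection of v onto each earlier basis vector
        for e in basis:
            c = inprod(e, v)
            v = [vn - c * en for vn, en in zip(v, e)]
        # normalize what remains
        norm = math.sqrt(inprod(v, v).real)
        basis.append([vn / norm for vn in v])
    return basis

e1, e2 = gram_schmidt([[1 + 0j, 1 + 0j], [1 + 0j, 0j]])
print(abs(inprod(e1, e2)))  # ~0: orthogonal
print(inprod(e1, e1).real)  # ~1: normalized
```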

As for the implementation of the inner product, it is given by:

$$\begin{gathered}
    V = \sum_{n = 1}^N v_n \vu{e}_n
    \quad
    W = \sum_{n = 1}^N w_n \vu{e}_n
    \\
    \implies \quad
    \Inprod{V}{W}
    = \sum_{n = 1}^N \sum_{m = 1}^N v_n^* w_m \Inprod{\vu{e}_n}{\vu{e}_m}
\end{gathered}$$

If the basis vectors $\vu{e}_1, ..., \vu{e}_N$ are already orthonormal, this reduces to:

$$\begin{aligned}
    \Inprod{V}{W}
    = \sum_{n = 1}^N v_n^* w_n
\end{aligned}$$

As it turns out, the components $v_n$ are given by the inner product with $\vu{e}_n$, where $\delta_{nm}$ is the Kronecker delta:

$$\begin{aligned}
    \Inprod{\vu{e}_n}{V}
    = \sum_{m = 1}^N v_m \Inprod{\vu{e}_n}{\vu{e}_m}
    = \sum_{m = 1}^N \delta_{nm} v_m
    = v_n
\end{aligned}$$
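
Both facts are easy to check numerically. In this Python sketch (ours, not from the text), the inner product reduces to $\sum_n v_n^* w_n$, and taking it with a standard basis vector picks out one component:

```python
def inprod(v, w):
    """<V|W> = sum_n v_n* w_n in an orthonormal basis."""
    return sum(vn.conjugate() * wn for vn, wn in zip(v, w))

# standard orthonormal basis of C^3
basis = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
V = [2 + 1j, -3 + 0j, 0.5j]

# <e_n|V> = v_n: the inner product with e_n recovers the n-th component
components = [inprod(e, V) for e in basis]
print(components)  # recovers the components of V exactly
```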

Infinite dimensions

As the dimensionality $N$ tends to infinity, things may or may not change significantly, depending on whether $N$ is countably or uncountably infinite.

In the former case, not much changes: the infinitely many discrete basis vectors $\vu{e}_n$ can all still be made orthonormal as usual, and as before:

$$\begin{aligned}
    V
    = \sum_{n = 1}^\infty v_n \vu{e}_n
\end{aligned}$$

A good example of such a countably infinite basis is the set of solution eigenfunctions of a Sturm-Liouville problem.
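
For instance (our own numeric check, not from the text), the Sturm-Liouville eigenfunctions $\sqrt{2} \sin(n \pi x)$ on $[0, 1]$ form a countably infinite orthonormal set under the integral inner product:

```python
import math

def inprod(f, g, a=0.0, b=1.0, steps=10_000):
    """Approximate the integral of f(x) g(x) over [a, b] by the midpoint rule."""
    h = (b - a) / steps
    return sum(f(a + (k + 0.5) * h) * g(a + (k + 0.5) * h) for k in range(steps)) * h

def e(n):
    """n-th eigenfunction of f'' = -lambda f with f(0) = f(1) = 0, normalized."""
    return lambda x: math.sqrt(2) * math.sin(n * math.pi * x)

print(inprod(e(1), e(1)))  # ~1: normalized
print(inprod(e(1), e(2)))  # ~0: orthogonal
```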

However, if the dimensionality is uncountably infinite, the basis vectors are continuous and cannot be labelled by an integer $n$. For example, all complex functions $f(x)$ defined for $x \in [a, b]$ which satisfy $f(a) = f(b) = 0$ form such a vector space. In this case $\Ket{f}$ is expanded as follows, where the basis vectors $\Ket{x}$ are labelled by the continuous coordinate $x$, and the components are the function values $f(x)$:

$$\begin{aligned}
    \Ket{f}
    = \int_a^b f(x) \Ket{x} \dd{x}
\end{aligned}$$

Similarly, the inner product $\Inprod{f}{g}$ must also be redefined as follows:

$$\begin{aligned}
    \Inprod{f}{g}
    = \int_a^b f^*(x) \: g(x) \dd{x}
\end{aligned}$$
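
This integral can be sketched in Python too (our own example functions, chosen to vanish at the endpoints as the space requires), approximating it by a midpoint Riemann sum:

```python
import math

def inprod(f, g, a, b, steps=20_000):
    """<f|g> = integral of f*(x) g(x) over [a, b], by the midpoint rule."""
    h = (b - a) / steps
    return sum(f(a + (k + 0.5) * h).conjugate() * g(a + (k + 0.5) * h) * h
               for k in range(steps))

f = lambda x: (1 + 1j) * math.sin(x)    # complex-valued, f(0) = f(pi) = 0
g = lambda x: complex(math.sin(2 * x))  # orthogonal to f on [0, pi]

print(abs(inprod(f, g, 0.0, math.pi)))  # ~0
print(abs(inprod(f, f, 0.0, math.pi)))  # ~pi, i.e. |1+1j|^2 * pi/2
```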

The concept of orthonormality must also be weakened. A well-behaved function $f(x)$ can be normalized as usual, but the basis vectors $\Ket{x}$ themselves cannot, since each represents an infinitesimal section of the real line.

The rationale in this case is that the action of the identity operator $\hat{I}$ must be preserved, which is given here in Dirac notation:

$$\begin{aligned}
    \hat{I}
    = \int_a^b \Ket{\xi} \Bra{\xi} \dd{\xi}
\end{aligned}$$

Applying the identity operator to $f(x)$ should just give $f(x)$ again:

$$\begin{aligned}
    f(x)
    = \Inprod{x}{f}
    = \matrixel{x}{\hat{I}}{f}
    = \int_a^b \Inprod{x}{\xi} \Inprod{\xi}{f} \dd{\xi}
    = \int_a^b \Inprod{x}{\xi} f(\xi) \dd{\xi}
\end{aligned}$$

Since we want the last integral to reduce to $f(x)$, it is plain to see that $\Inprod{x}{\xi}$ can only be a Dirac delta function, i.e. $\Inprod{x}{\xi} = \delta(x - \xi)$:

$$\begin{aligned}
    \int_a^b \Inprod{x}{\xi} f(\xi) \dd{\xi}
    = \int_a^b \delta(x - \xi) f(\xi) \dd{\xi}
    = f(x)
\end{aligned}$$
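
This sifting property can be illustrated numerically (our own sketch, not from the text) by replacing the Dirac delta with a narrow unit-area Gaussian, which tends to $\delta(u)$ as its width goes to zero:

```python
import math

def delta(u, eps=0.01):
    """Narrow normalized Gaussian, approximating the Dirac delta for small eps."""
    return math.exp(-u * u / (2 * eps * eps)) / (eps * math.sqrt(2 * math.pi))

def sift(f, x, a=-1.0, b=1.0, steps=20_000):
    """Midpoint-rule approximation of the integral of delta(x - xi) f(xi)."""
    h = (b - a) / steps
    return sum(delta(x - (a + (k + 0.5) * h)) * f(a + (k + 0.5) * h) * h
               for k in range(steps))

f = lambda x: x * x + 1.0
print(sift(f, 0.5))  # ≈ f(0.5) = 1.25
```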

Consequently, $\Inprod{x}{\xi} = 0$ if $x \neq \xi$, as expected for an orthogonal set of vectors, but if $x = \xi$ the inner product $\Inprod{x}{\xi}$ is infinite, unlike earlier.

Technically, because the basis vectors $\Ket{x}$ cannot be normalized, they are not members of a Hilbert space, but rather of a superset called a rigged Hilbert space. Such vectors have no finite inner product with themselves, but do have one with all vectors from the actual Hilbert space.