---
title: "Gram-Schmidt method"
sort_title: "Gram-Schmidt method"
date: 2021-02-22
categories:
- Mathematics
- Algorithms
layout: "concept"
---

Given a set of $$N$$ linearly independent vectors $$\ket{V_1}, \ket{V_2}, ...$$
from a [Hilbert space](/know/concept/hilbert-space/),
the **Gram-Schmidt method** is an algorithm that turns them
into an orthonormal set $$\ket{n_1}, \ket{n_2}, ...$$ as follows
(in [Dirac notation](/know/concept/dirac-notation/)):

1.  Take the first vector $$\ket{V_1}$$ and normalize it to get $$\ket{n_1}$$:
    
    $$\begin{aligned}
        \ket{n_1}
        = \frac{\ket{V_1}}{\sqrt{\inprod{V_1}{V_1}}}
    \end{aligned}$$
    
2.  Begin loop. Take the next input vector $$\ket{V_j}$$, and
    subtract from it its projection onto every already-processed vector:
    
    $$\begin{aligned}
        \ket{g_j}
        = \ket{V_j} - \ket{n_1} \inprod{n_1}{V_j} - \ket{n_2} \inprod{n_2}{V_j} - ... - \ket{n_{j-1}} \inprod{n_{j-1}}{V_j}
    \end{aligned}$$
    
    This leaves only the part of $$\ket{V_j}$$
    that is orthogonal to all previous $$\ket{n_k}$$.
    This is why the input vectors must be linearly independent:
    if $$\ket{V_j}$$ depended linearly on its predecessors,
    $$\ket{g_j}$$ would become zero and could not be normalized.
    
    On a computer, the resulting $$\ket{g_j}$$ will
    not be perfectly orthogonal due to rounding errors,
    and the formulation of step #2 given above
    is particularly sensitive to them.
    A numerically more stable approach is:
    
    $$\begin{aligned}
        \ket{g_j^{(1)}}
        &= \ket{V_j} - \ket{n_1} \inprod{n_1}{V_j}
        \\
        \ket{g_j^{(2)}}
        &= \ket{g_j^{(1)}} - \ket{n_2} \inprod{n_2}{g_j^{(1)}}
        \\
        \vdots
        \\
        \ket{g_j}
        = \ket{g_j^{(j-1)}}
        &= \ket{g_j^{(j-2)}} - \ket{n_{j-1}} \inprod{n_{j-1}}{g_j^{(j-2)}}
    \end{aligned}$$
    
    In other words, rather than projecting $$\ket{V_j}$$ directly onto each $$\ket{n_k}$$,
    we project only the part of $$\ket{V_j}$$ that has already been made orthogonal
    to all previous $$\ket{n_m}$$ with $$m < k$$.
    This is known as the **modified Gram-Schmidt method**;
    a code sketch of it is given after this list.
    
3.  Normalize the resulting ortho*gonal* vector $$\ket{g_j}$$ to make it ortho*normal*:
    
    $$\begin{aligned}
        \ket{n_j}
        = \frac{\ket{g_j}}{\sqrt{\inprod{g_j}{g_j}}}
    \end{aligned}$$
    
4.  Loop back to step 2, taking the next vector $$\ket{V_{j+1}}$$,
    until all $$N$$ have been processed.
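
As a concrete illustration of the procedure above, here is a minimal NumPy sketch
of the modified Gram-Schmidt method.
The function name `gram_schmidt` and the convention of storing the vectors
as the columns of a matrix are just choices made for this example:

```python
import numpy as np

def gram_schmidt(V):
    """Orthonormalize the columns of V using modified Gram-Schmidt.
    The columns are assumed to be linearly independent."""
    V = np.asarray(V, dtype=complex)
    N = V.shape[1]
    Q = np.zeros_like(V)
    for j in range(N):
        # Step 2: subtract the projections onto the already-processed
        # vectors one at a time, each time projecting the partially
        # orthogonalized |g_j> rather than the original |V_j>.
        g = V[:, j].copy()
        for k in range(j):
            # np.vdot conjugates its first argument, so this is <n_k|g>.
            g -= Q[:, k] * np.vdot(Q[:, k], g)
        # Step 3: normalize the orthogonal vector |g_j> to get |n_j>.
        Q[:, j] = g / np.sqrt(np.vdot(g, g).real)
    return Q

# Example: three linearly independent vectors as the columns of a 3x3 matrix.
V = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]]).T
Q = gram_schmidt(V)
print(np.round(Q.conj().T @ Q, 12))  # should print the 3x3 identity matrix
```

In exact arithmetic the inner loop over $$k$$ gives the same result as
the single projection formula of step #2,
but in floating-point arithmetic it accumulates much smaller rounding errors.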


