Given a set of linearly independent vectors $|v_1\rangle, |v_2\rangle, \ldots, |v_n\rangle$ from a Hilbert space, the Gram-Schmidt method is an algorithm that turns them into an orthonormal set $|e_1\rangle, |e_2\rangle, \ldots, |e_n\rangle$ as follows (in Dirac notation):
1. Take the first vector $|v_1\rangle$ and normalize it to get $|e_1\rangle$:

$$|e_1\rangle = \frac{|v_1\rangle}{\sqrt{\langle v_1 | v_1 \rangle}}$$
2. Begin loop. Take the next input vector $|v_k\rangle$, and subtract from it its projection onto every already-processed vector:

$$|w_k\rangle = |v_k\rangle - \sum_{j=1}^{k-1} |e_j\rangle \langle e_j | v_k \rangle$$

This leaves only the part of $|v_k\rangle$ that is orthogonal to all previous $|e_j\rangle$. This is why the input vectors must be linearly independent; otherwise $|w_k\rangle$ could become zero.
On a computer, the resulting $|w_k\rangle$ will not be perfectly orthogonal due to rounding errors.
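To make this concrete, here is a minimal sketch of the whole procedure in this classical form, assuming real vectors stored as the columns of a NumPy array (the function name and the error handling are my own; the normalization and loop correspond to steps 3 and 4 below):

```python
import numpy as np

def gram_schmidt_classical(V):
    """Orthonormalize the columns of V by classical Gram-Schmidt.

    Assumes real vectors, so <e_j|v_k> is an ordinary dot product.
    """
    n = V.shape[1]
    E = np.zeros_like(V, dtype=float)
    for k in range(n):
        # |w_k> = |v_k> - sum_j |e_j><e_j|v_k>, projecting the *original* |v_k>.
        w = V[:, k].astype(float)
        for j in range(k):
            w -= E[:, j] * np.dot(E[:, j], V[:, k])
        norm = np.linalg.norm(w)  # sqrt(<w_k|w_k>)
        if norm == 0.0:
            # Only possible if the inputs were not linearly independent.
            raise ValueError("input vectors are linearly dependent")
        E[:, k] = w / norm  # normalize |w_k> to get |e_k>
    return E
```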
This classical formulation of step #2 is particularly bad in that respect. A better approach is:

$$|w_k^{(0)}\rangle = |v_k\rangle, \qquad |w_k^{(j)}\rangle = |w_k^{(j-1)}\rangle - |e_j\rangle \langle e_j | w_k^{(j-1)} \rangle, \qquad j = 1, \ldots, k-1$$

with $|w_k\rangle = |w_k^{(k-1)}\rangle$.
In other words, instead of projecting $|v_k\rangle$ directly onto all the $|e_j\rangle$, we project onto each $|e_j\rangle$ only the part of $|v_k\rangle$ that has already been made orthogonal to all previous $|e_i\rangle$ with $i < j$.
This is known as the modified Gram-Schmidt method.
3. Normalize the resulting orthogonal vector to make it orthonormal:

$$|e_k\rangle = \frac{|w_k\rangle}{\sqrt{\langle w_k | w_k \rangle}}$$
4. Loop back to step 2, taking the next vector $|v_{k+1}\rangle$, until all $n$ input vectors have been processed.
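Here is the same sketch rewritten with the modified update, under the same assumptions as above; the last lines illustrate, on a deliberately ill-conditioned input, how much less orthogonality is lost than with the classical version:

```python
import numpy as np

def gram_schmidt_modified(V):
    """Orthonormalize the columns of V by modified Gram-Schmidt.

    Identical to the classical method in exact arithmetic, but each
    projection is subtracted from the partially orthogonalized working
    vector instead of from the original |v_k>.
    """
    E = np.array(V, dtype=float)  # working copy; columns become |e_k>
    n = E.shape[1]
    for k in range(n):
        for j in range(k):
            # |w_k^(j)> = |w_k^(j-1)> - |e_j><e_j|w_k^(j-1)>
            E[:, k] -= E[:, j] * np.dot(E[:, j], E[:, k])
        norm = np.linalg.norm(E[:, k])
        if norm == 0.0:
            raise ValueError("input vectors are linearly dependent")
        E[:, k] /= norm
    return E

# Loss of orthogonality on an ill-conditioned input (an 8x8 Hilbert
# matrix) is far smaller here than with the classical sketch above.
V = np.array([[1.0 / (i + j + 1) for j in range(8)] for i in range(8)])
E = gram_schmidt_modified(V)
print(np.abs(E.T @ E - np.eye(8)).max())
```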
- R. Shankar, *Principles of Quantum Mechanics*, 2nd edition, Plenum Press, 1994.