Categories: Algorithms, Mathematics.
Gram-Schmidt method
Given a set of $N$ linearly independent vectors $\ket{V_1}, \ket{V_2}, ...$ from a Hilbert space, the Gram-Schmidt method is an algorithm that turns them into an orthonormal set $\ket{n_1}, \ket{n_2}, ...$ as follows (in Dirac notation):

1. Take the first vector $\ket{V_1}$ and normalize it to get $\ket{n_1}$:
$\begin{aligned} \ket{n_1} = \frac{\ket{V_1}}{\sqrt{\inprod{V_1}{V_1}}} \end{aligned}$
2. Begin loop. Take the next input vector $\ket{V_j}$, and subtract from it its projection onto every already-processed vector:
$\begin{aligned} \ket{g_j} = \ket{V_j} - \ket{n_1} \inprod{n_1}{V_j} - \ket{n_2} \inprod{n_2}{V_j} - ... - \ket{n_{j-1}} \inprod{n_{j-1}}{V_j} \end{aligned}$This leaves only the part of $\ket{V_j}$ that is orthogonal to all previous $\ket{n_k}$. This is why the input vectors must be linearly independent; otherwise $\ket{g_j}$ could become zero.
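This classical projection step can be sketched in Python with NumPy (the function and variable names are illustrative, not from the source):

```python
import numpy as np

def orthogonalize_classical(V_j, ns):
    """Classical Gram-Schmidt step: subtract from V_j its projection
    onto each already-processed orthonormal vector n_k."""
    g_j = np.array(V_j, dtype=complex)
    for n_k in ns:
        # np.vdot conjugates its first argument, matching <n_k|V_j>
        g_j = g_j - n_k * np.vdot(n_k, V_j)
    return g_j
```

Note that every projection here uses the original $\ket{V_j}$, which is exactly what the next remark is about.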
On a computer, the resulting $\ket{g_j}$ will not be perfectly orthogonal due to rounding errors. The formula for step 2 given above is numerically unstable: rounding errors from each projection are carried into all the later ones. A better approach is:
$\begin{aligned} \ket{g_j^{(1)}} &= \ket{V_j} - \ket{n_1} \inprod{n_1}{V_j} \\ \ket{g_j^{(2)}} &= \ket{g_j^{(1)}} - \ket{n_2} \inprod{n_2}{g_j^{(1)}} \\ \vdots \\ \ket{g_j} = \ket{g_j^{(j-1)}} &= \ket{g_j^{(j-2)}} - \ket{n_{j-1}} \inprod{n_{j-1}}{g_j^{(j-2)}} \end{aligned}$In other words, instead of projecting $\ket{V_j}$ directly onto all $\ket{n_k}$, we instead project only the part of $\ket{V_j}$ that has already been made orthogonal to all previous $\ket{n_m}$ with $m < k$. This is known as the modified Gram-Schmidt method.

3. Normalize the resulting orthogonal vector $\ket{g_j}$ to make it orthonormal:
$\begin{aligned} \ket{n_j} = \frac{\ket{g_j}}{\sqrt{\inprod{g_j}{g_j}}} \end{aligned}$
4. Loop back to step 2, taking the next vector $\ket{V_{j+1}}$, until all $N$ input vectors have been processed.
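Putting all the steps together, a minimal Python/NumPy sketch of the modified Gram-Schmidt method could look as follows (names are my own, not from the source):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize linearly independent vectors V_1, ..., V_N with the
    modified Gram-Schmidt method; returns the list n_1, ..., n_N."""
    ns = []
    for V_j in vectors:
        g = np.array(V_j, dtype=complex)
        # Project each n_k out of the running remainder g (not out of the
        # original V_j), which is what makes this the *modified* method.
        for n_k in ns:
            g = g - n_k * np.vdot(n_k, g)  # np.vdot conjugates its first argument
        # Normalize g_j to get n_j; a (near-)zero norm here means the
        # inputs were not actually linearly independent.
        ns.append(g / np.sqrt(np.vdot(g, g).real))
    return ns
```

For example, feeding it three linearly independent 3-component vectors returns three vectors satisfying $\inprod{n_i}{n_j} = \delta_{ij}$ up to rounding error.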