path: root/content/know/concept/pulay-mixing/index.pdc
author	Prefetch	2021-07-01 22:21:26 +0200
committer	Prefetch	2021-07-01 22:21:26 +0200
commit	805718880c936d778c99fe0d5cfdb238342a83c7 (patch)
tree	194d1cd1a3c21600dfa5371d6935dbc2bfd12c5c /content/know/concept/pulay-mixing/index.pdc
parent	f9cce7d563d0ea2ac591c31ff7d248ad3d02d1ac (diff)
Expand knowledge base
Diffstat (limited to 'content/know/concept/pulay-mixing/index.pdc')
-rw-r--r--	content/know/concept/pulay-mixing/index.pdc	13
1 file changed, 6 insertions, 7 deletions
diff --git a/content/know/concept/pulay-mixing/index.pdc b/content/know/concept/pulay-mixing/index.pdc
index 8daa54f..4e7a411 100644
--- a/content/know/concept/pulay-mixing/index.pdc
+++ b/content/know/concept/pulay-mixing/index.pdc
@@ -16,8 +16,8 @@ by generating a series $\rho_1$, $\rho_2$, etc.
converging towards the desired solution $\rho_*$.
**Pulay mixing**, also often called
**direct inversion in the iterative subspace** (DIIS),
-is an effective method to speed up convergence,
-which also helps to avoid periodic divergences.
+can speed up the convergence for some types of problems,
+and also helps to avoid periodic divergences.
The key concept it relies on is the **residual vector** $R_n$
of the $n$th iteration, which in some way measures the error of the current $\rho_n$.
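(Editorial note, not part of the diff: the excerpt never shows how $R_n$ is defined. In the simplest self-consistency schemes the residual is commonly taken to be the change produced by one bare iteration; the following definition is an illustration consistent with the $\rho_m^\mathrm{new}$ appearing later in this diff, not a quotation from the article:

$$\begin{aligned}
R_n = \rho_n^\mathrm{new} - \rho_n
\end{aligned}$$

where $\rho_n^\mathrm{new}$ is the raw output of the $n$th iteration before any mixing is applied.)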
@@ -113,17 +113,16 @@ $\lambda = - \braket{R_{n+1}}{R_{n+1}}$,
where $R_{n+1}$ is the *predicted* residual of the next iteration,
subject to the two assumptions.
-This method is very effective.
However, in practice, the earlier inputs $\rho_1$, $\rho_2$, etc.
are much further from $\rho_*$ than $\rho_n$,
-so usually only the most recent $N$ inputs $\rho_{n - N}$, ..., $\rho_n$ are used:
+so usually only the most recent $N\!+\!1$ inputs $\rho_{n - N}$, ..., $\rho_n$ are used:
$$\begin{aligned}
\rho_{n+1}
- = \sum_{m = N}^n \alpha_m \rho_m
+ = \sum_{m = n-N}^n \alpha_m \rho_m
\end{aligned}$$
-You might be confused by the absence of all $\rho_m^\mathrm{new}$
+You might be confused by the absence of any $\rho_m^\mathrm{new}$
in the creation of $\rho_{n+1}$, as if the iteration's outputs are being ignored.
This is due to the first assumption,
which states that $\rho_n^\mathrm{new}$ and $\rho_n$ are already similar,
@@ -155,7 +154,7 @@ while still giving more weight to iterations with smaller residuals.
Pulay mixing is very effective for certain types of problems,
e.g. density functional theory,
-where it can accelerate convergence by up to one order of magnitude!
+where it can accelerate convergence by up to two orders of magnitude!