Direct inversion in the iterative subspace (DIIS) was originally developed by
Pulay for accelerating SCF convergence.^{760} Subsequently, Csaszar
and Pulay used a similar scheme for geometry optimization, which they termed
GDIIS.^{197} The method is somewhat different from the usual
quasi-Newton type approach and is included in Optimize as an alternative to the
EF algorithm. Tests indicate that its performance is similar to EF, at least
for small systems; however there is rarely an advantage in using GDIIS in
preference to EF.

In GDIIS, geometries ${\mathbf{x}}_{i}$ generated in previous optimization cycles are linearly combined to find the “best” geometry on the current cycle

$${\mathbf{x}}_{n}=\sum _{i=1}^{m}{c}_{i}{\mathbf{x}}_{i}$$ | (A.33) |

where the problem is to find the best values for the coefficients ${c}_{i}$.

If we express each geometry, ${\mathbf{x}}_{i}$, by its deviation from the sought-after
final geometry, ${\mathbf{x}}_{f}$, *i.e.*, ${\mathbf{x}}_{f}={\mathbf{x}}_{i}+{\mathbf{e}}_{i}$, where
${\mathbf{e}}_{i}$ is an error vector, then it is obvious that if the conditions

$$\mathbf{r}=\sum {c}_{i}{\mathbf{e}}_{i}=\mathbf{0}$$ | (A.34) |

and

$$\sum {c}_{i}=1$$ | (A.35) |

are satisfied, then the relation

$$\sum {c}_{i}{\mathbf{x}}_{i}={\mathbf{x}}_{f}$$ | (A.36) |

also holds.
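
The implication is immediate: writing each ${\mathbf{x}}_{i}$ as ${\mathbf{x}}_{f}-{\mathbf{e}}_{i}$ and using the constraint of Eq. (A.35) together with the vanishing residuum of Eq. (A.34),

$$\sum {c}_{i}{\mathbf{x}}_{i}=\sum {c}_{i}\left({\mathbf{x}}_{f}-{\mathbf{e}}_{i}\right)=\left(\sum {c}_{i}\right){\mathbf{x}}_{f}-\mathbf{r}={\mathbf{x}}_{f}$$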

The true error vectors ${\mathbf{e}}_{i}$ are, of course, unknown. However, in the case of a nearly quadratic energy function they can be approximated by

$${\mathbf{e}}_{i}=-{\mathbf{H}}^{-1}{\mathbf{g}}_{i}$$ | (A.37) |

where ${\mathbf{g}}_{i}$ is the gradient vector corresponding to the geometry ${\mathbf{x}}_{i}$ and $\mathbf{H}$ is an approximation to the Hessian matrix. Minimization of the norm of the residuum vector $\mathbf{r}$, Eq. (A.34), together with the constraint equation, Eq. (A.35), leads to a system of $m+1$ linear equations

$$\left(\begin{array}{cccc}{B}_{11}& \cdots & {B}_{1m}& 1\\ \vdots & \ddots & \vdots & \vdots \\ {B}_{m1}& \cdots & {B}_{mm}& 1\\ 1& \cdots & 1& 0\end{array}\right)\left(\begin{array}{c}{c}_{1}\\ \vdots \\ {c}_{m}\\ -\lambda \end{array}\right)=\left(\begin{array}{c}0\\ \vdots \\ 0\\ 1\end{array}\right)$$ | (A.38) |

where ${B}_{ij}=\langle {\mathbf{e}}_{i}|{\mathbf{e}}_{j}\rangle$ is the scalar product of the error vectors ${\mathbf{e}}_{i}$ and ${\mathbf{e}}_{j}$, and $\lambda$ is a Lagrange multiplier.
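
As an illustration, the linear system of Eq. (A.38) can be assembled and solved in a few lines of NumPy. This is a sketch, not the Optimize implementation; the function name and interface are ours.

```python
import numpy as np

def solve_gdiis_coefficients(error_vectors):
    """Solve the constrained system of Eq. (A.38) for the coefficients c_i.

    error_vectors: list of m error vectors e_i (approximated as -H^{-1} g_i).
    Returns an array of m coefficients whose sum is 1.
    """
    m = len(error_vectors)
    # B_ij = <e_i | e_j>, the scalar products of the error vectors
    B = np.array([[np.dot(ei, ej) for ej in error_vectors]
                  for ei in error_vectors])
    # Augment B with the constraint sum(c_i) = 1 via a Lagrange multiplier
    A = np.zeros((m + 1, m + 1))
    A[:m, :m] = B
    A[:m, m] = 1.0
    A[m, :m] = 1.0
    rhs = np.zeros(m + 1)
    rhs[m] = 1.0
    sol = np.linalg.solve(A, rhs)
    return sol[:m]  # the last entry of sol is -lambda
```

Note that for points in an $n$-dimensional space the matrix $\mathbf{B}$ becomes singular once $m > n + 1$, which is one practical reason for limiting the number of retained geometries.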

The coefficients ${c}_{i}$ determined from Eq. (A.38) are used to calculate an intermediate interpolated geometry

$${\mathbf{x}}_{m+1}^{\prime}=\sum {c}_{i}{\mathbf{x}}_{i}$$ | (A.39) |

and its corresponding interpolated gradient

$${\mathbf{g}}_{m+1}^{\prime}=\sum {c}_{i}{\mathbf{g}}_{i}$$ | (A.40) |

A new, independent geometry is generated from the interpolated geometry and gradient according to

$${\mathbf{x}}_{m+1}={\mathbf{x}}_{m+1}^{\prime}-{\mathbf{H}}^{-1}{\mathbf{g}}_{m+1}^{\prime}.$$ | (A.41) |
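
A complete GDIIS cycle, Eqs. (A.37) through (A.41), can then be sketched as follows (again a minimal illustration with names of our choosing, assuming a fixed approximate inverse Hessian is supplied):

```python
import numpy as np

def gdiis_step(geometries, gradients, H_inv):
    """One GDIIS update: error vectors, coefficients, interpolation, Newton step.

    geometries, gradients: lists of previous x_i and g_i (NumPy arrays).
    H_inv: approximate inverse Hessian (must be positive definite).
    """
    m = len(geometries)
    # Error vectors e_i = -H^{-1} g_i   (Eq. A.37)
    errors = [-H_inv @ g for g in gradients]
    # Augmented linear system for the coefficients   (Eq. A.38)
    A = np.ones((m + 1, m + 1))
    A[:m, :m] = [[e_i @ e_j for e_j in errors] for e_i in errors]
    A[m, m] = 0.0
    rhs = np.zeros(m + 1)
    rhs[m] = 1.0
    c = np.linalg.solve(A, rhs)[:m]
    # Interpolated geometry and gradient   (Eqs. A.39, A.40)
    x_int = sum(ci * xi for ci, xi in zip(c, geometries))
    g_int = sum(ci * gi for ci, gi in zip(c, gradients))
    # New geometry from a quasi-Newton step at the interpolated point (Eq. A.41)
    return x_int - H_inv @ g_int
```

On an exactly quadratic surface with the exact inverse Hessian, the interpolated gradient is the true gradient at the interpolated point, so a single step of Eq. (A.41) lands on the minimum.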

Note:
Convergence is theoretically guaranteed regardless of the quality of the
Hessian matrix (as long as it is positive definite), and the original GDIIS
algorithm used a static Hessian (*i.e.*, the original starting Hessian, often a
simple unit matrix, remained unchanged during the entire optimization).
However, updating the Hessian at each cycle generally results in more rapid
convergence, and this is the default in Optimize.

Other modifications to the original method include limiting the number of previous geometries used in Eq. (A.33) by neglecting earlier geometries, and eliminating any geometry more than a certain distance from the current geometry (default = 0.3 a.u.).
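
The history pruning just described amounts to a simple filter; a sketch (thresholds and names are ours, with the 0.3 a.u. default taken from the text):

```python
import numpy as np

def prune_history(geometries, gradients, current, max_keep=5, max_dist=0.3):
    """Keep only recent, nearby geometries for the GDIIS expansion.

    Drops any geometry farther than max_dist (a.u.) from the current
    geometry, then retains at most the max_keep most recent entries.
    """
    kept = [(x, g) for x, g in zip(geometries, gradients)
            if np.linalg.norm(x - current) <= max_dist]
    kept = kept[-max_keep:]  # most recent entries only
    return [x for x, _ in kept], [g for _, g in kept]
```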