The GDM method (Section 4.5.6) is an extremely effective energy minimizer, but it cannot reliably be applied to optimize excited-state orbitals, as such states are typically unstable stationary points (saddle points) in orbital-rotation space. Energy-minimization-based approaches therefore tend to ‘slip’ from these saddle points down to some local minimum (often the ground state), a phenomenon commonly described as ‘variational collapse’.

Diptarka Hait and Martin Head-Gordon have proposed an alternative way to
optimize excited-state orbitals: minimizing the square of the energy
gradient with respect to the orbital degrees of freedom.^{Hait:2020b} This energy
gradient is zero at every stationary point of the energy, and thus all such
stationary points are global minima of the squared energy gradient $\mathrm{\Delta}$.
Quasi-Newton methods therefore can reliably converge to the stationary point
closest to the initial guess orbitals by minimizing $\mathrm{\Delta}$, without the risk of
variational collapse. The resulting squared-gradient minimization (SGM) approach is thus essentially an
extension of GDM that converges to the state (*i.e.*, stationary point in
orbital space) closest to the initial guess, as opposed to the closest energy minimum.
SGM can consequently be used for reliable excited-state optimization within a
direct-minimization framework, much as the MOM algorithm of Section 4.5.10
can be used in conjunction with iterative diagonalization methods like DIIS. Further details on
applying SGM to excited-state orbital optimization can be found in Section 7.7.
Full details of the SGM algorithm are provided in Ref. Hait:2020b.
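The essential idea can be illustrated with a one-dimensional toy model (this sketch is purely illustrative and is not the actual SGM implementation, which works in orbital-rotation space): for $E(x) = x^4/4 - x^2/2$, the stationary points are $x = -1, 0, +1$, with $x = 0$ an unstable stationary point. Plain gradient descent on $E$ started near $x = 0$ slips away to a minimum (the analogue of variational collapse), whereas gradient descent on the squared gradient $\Delta = |dE/dx|^2$ converges to the nearest stationary point.

```python
# Toy model: E(x) = x^4/4 - x^2/2, with stationary points at x = -1, 0, +1.
# x = 0 is an unstable stationary point (a saddle-point analogue).

def grad_E(x):
    """dE/dx = x^3 - x."""
    return x**3 - x

def grad_Delta(x):
    """d/dx (dE/dx)^2 = 2 (dE/dx) (d^2E/dx^2), with d^2E/dx^2 = 3x^2 - 1."""
    return 2.0 * grad_E(x) * (3.0 * x**2 - 1.0)

def descend(grad, x, step=0.05, n_iter=500):
    """Simple fixed-step gradient descent on the function whose gradient is `grad`."""
    for _ in range(n_iter):
        x -= step * grad(x)
    return x

x0 = 0.2                           # initial guess near the unstable point
x_energy = descend(grad_E, x0)     # minimizing E: collapses to the minimum at x = 1
x_sgm = descend(grad_Delta, x0)    # minimizing Delta: stays at the nearest stationary point, x = 0
print(round(x_energy, 4), round(x_sgm, 4))  # → 1.0 0.0
```

In practice SGM uses quasi-Newton steps rather than fixed-step descent, but the qualitative behavior is the same: minimizing $\Delta$ retains unstable stationary points as attractors, while minimizing $E$ does not.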

The use of SGM is controlled by the SCF_ALGORITHM variable in the *$rem* section:

SCF_ALGORITHM

Algorithm used for converging the SCF.

TYPE:

STRING

DEFAULT:

None

OPTIONS:

SGM
SGM_LS
SGM_QLS (for R and U orbitals only)

RECOMMENDATION:

SGM should be used for RO or OS_RO orbitals only.
SGM_LS is recommended for R or U orbitals, though it can also be used for RO and OS_RO orbitals.
SGM_QLS is a slower but more robust option for R and U calculations.
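As an illustration, an unrestricted excited-state calculation might request SGM as follows (the method and basis here are placeholders, not a recommendation; the initial guess orbitals would have to be prepared separately, e.g. from a non-aufbau occupation):

```
$rem
   METHOD          wB97X-V
   BASIS           6-31G*
   UNRESTRICTED    TRUE
   SCF_ALGORITHM   SGM_LS
$end
```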

DELTA_GRADIENT_SCALE

Scales the gradient of $\mathrm{\Delta}$ by $N$/100; reducing the step size in this way can help in cases with troublesome convergence.

TYPE:

INTEGER

DEFAULT:

100

OPTIONS:

$N$

RECOMMENDATION:

Use the default. For problematic cases, $N$ = 50, 25, 10, or even 1 could be useful.
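For a calculation that struggles to converge, the gradient scaling can be combined with SGM in the *$rem* section, for example (an illustrative fragment only; the elided keywords stand for the rest of the job specification):

```
$rem
   ...
   SCF_ALGORITHM          SGM
   DELTA_GRADIENT_SCALE   25
$end
```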