9.1 Equilibrium Geometries and Transition-State Structures with Q-Chem

9.1.5 Constrained Optimization

(July 14, 2022)

Constrained optimization refers to the optimization of molecular structures in which certain parameters (e.g., bond lengths, bond angles or dihedral angles) are fixed. In quantum chemistry calculations, this has traditionally been accomplished using Z-matrix coordinates, with the desired parameter set in the Z-matrix and simply omitted from the optimization space. In 1992, Baker presented an algorithm for constrained optimization directly in Cartesian coordinates [Baker, J. Comput. Chem. 13, 240 (1992)]. Baker's algorithm used both penalty functions and the classical method of Lagrange multipliers, and was developed in order to impose constraints on a molecule obtained from a graphical model builder as a set of Cartesian coordinates. Some improvements widening the range of constraints that could be handled were made in 1993 [Baker and Bergeron, J. Comput. Chem. 14, 1339 (1993)]. Q-Chem includes the latest version of this algorithm, which has been modified to handle constraints directly in delocalized internal coordinates [Baker, J. Comput. Chem. 18, 1079 (1997)].

The essential problem in constrained optimization is to minimize a function of $n$ variables, $F(\mathbf{x})$, subject to a series of $m$ constraints of the form $C_i(\mathbf{x}) = 0$, $i = 1, \ldots, m$. Assuming $m < n$, then perhaps the best way to proceed (if this were possible in practice) would be to use the $m$ constraint equations to eliminate $m$ of the variables, and then solve the resulting unconstrained problem in terms of the $n - m$ independent variables. This is exactly what occurs in a Z-matrix optimization. Such an approach cannot be used in Cartesian coordinates, as standard distance and angle constraints are non-linear functions of the appropriate coordinates. For example, a distance constraint between atoms $i$ and $j$ is, in Cartesians, given by $(R_{ij} - R_0) = 0$, with

\begin{equation}
R_{ij} = \left[ (x_i - x_j)^2 + (y_i - y_j)^2 + (z_i - z_j)^2 \right]^{1/2} \tag{9.17}
\end{equation}

where $R_0$ is the constrained distance. This constraint obviously cannot be satisfied by elimination. What can be eliminated in Cartesians are the individual $x$, $y$ and $z$ coordinates themselves, and in this way individual atoms can be totally or partially frozen.
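To make the constraint functions concrete, the sketch below (illustrative Python/NumPy, not Q-Chem source code; the function name is hypothetical) evaluates the distance constraint based on Eq. (9.17) together with its Cartesian gradient, which is the constraint first derivative $dC_i/dx_j$ required by the Lagrangian machinery introduced below.

```python
import numpy as np

def distance_constraint(x, i, j, r0):
    """Distance constraint C(x) = R_ij - R0 built on Eq. (9.17).

    x    -- flat array of Cartesian coordinates, length 3N
    i, j -- 0-based atom indices
    r0   -- the constrained (target) distance R0
    Returns the constraint value C and its gradient dC/dx (length 3N).
    """
    d = x[3*i:3*i+3] - x[3*j:3*j+3]   # (xi - xj, yi - yj, zi - zj)
    rij = np.linalg.norm(d)           # R_ij of Eq. (9.17)
    grad = np.zeros_like(x)
    grad[3*i:3*i+3] = d / rij         # dC/d(xi, yi, zi)
    grad[3*j:3*j+3] = -d / rij        # dC/d(xj, yj, zj)
    return rij - r0, grad
```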

Internal constraints can be handled in Cartesian coordinates by introducing the Lagrangian function

\begin{equation}
L(\mathbf{x}, \boldsymbol{\lambda}) = F(\mathbf{x}) - \sum_{i=1}^{m} \lambda_i \, C_i(\mathbf{x}) \tag{9.18}
\end{equation}

which replaces the function $F(\mathbf{x})$ of the unconstrained case. Here, the $\lambda_i$ are the so-called Lagrange multipliers, one for each constraint $C_i(\mathbf{x})$. Differentiating Eq. (9.18) with respect to $\mathbf{x}$ and $\boldsymbol{\lambda}$ affords

\begin{align}
\frac{dL(\mathbf{x}, \boldsymbol{\lambda})}{dx_j} &= \frac{dF(\mathbf{x})}{dx_j} - \sum_{i=1}^{m} \lambda_i \left( \frac{dC_i(\mathbf{x})}{dx_j} \right) \tag{9.19} \\
\frac{dL(\mathbf{x}, \boldsymbol{\lambda})}{d\lambda_i} &= -C_i(\mathbf{x}) \tag{9.20}
\end{align}

At a stationary point of the Lagrangian we have $\nabla L = \mathbf{0}$, i.e., all $dL/dx_j = 0$ and all $dL/d\lambda_i = 0$. This latter condition means that all $C_i(\mathbf{x}) = 0$, and thus all constraints are satisfied. Hence, finding a set of values $(\mathbf{x}, \boldsymbol{\lambda})$ for which $\nabla L = \mathbf{0}$ will give a possible solution to the constrained optimization problem, in exactly the same way as finding an $\mathbf{x}$ for which $\mathbf{g} = \nabla F = \mathbf{0}$ gives a solution to the corresponding unconstrained problem.
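As an illustration of how Eqs. (9.19) and (9.20) combine into a single gradient over the extended parameter set $(\mathbf{x}, \boldsymbol{\lambda})$, consider the following minimal NumPy sketch (the helper name and array shapes are assumptions for illustration, not Q-Chem source code):

```python
import numpy as np

def lagrangian_gradient(grad_F, lam, C_vals, C_grads):
    """Gradient of L(x, lambda), assembled from Eqs. (9.19) and (9.20).

    grad_F  -- dF/dx,                      shape (n,)
    lam     -- Lagrange multipliers,       shape (m,)
    C_vals  -- constraint values C_i(x),   shape (m,)
    C_grads -- constraint normals dC_i/dx, shape (m, n)
    The optimization is converged when this (n + m)-vector vanishes.
    """
    dL_dx = grad_F - C_grads.T @ lam   # Eq. (9.19)
    dL_dlam = -C_vals                  # Eq. (9.20)
    return np.concatenate([dL_dx, dL_dlam])
```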

The Lagrangian second derivative matrix, which is the analogue of the Hessian matrix in an unconstrained optimization, is given by

\begin{equation}
\nabla^2 L = \begin{pmatrix}
\dfrac{d^2 L(\mathbf{x}, \boldsymbol{\lambda})}{dx_j \, dx_k} & \dfrac{d^2 L(\mathbf{x}, \boldsymbol{\lambda})}{dx_j \, d\lambda_i} \\[2.5ex]
\dfrac{d^2 L(\mathbf{x}, \boldsymbol{\lambda})}{dx_j \, d\lambda_i} & \dfrac{d^2 L(\mathbf{x}, \boldsymbol{\lambda})}{d\lambda_j \, d\lambda_i}
\end{pmatrix} \tag{9.21}
\end{equation}

where

\begin{align}
\frac{d^2 L(\mathbf{x}, \boldsymbol{\lambda})}{dx_j \, dx_k} &= \frac{d^2 F(\mathbf{x})}{dx_j \, dx_k} - \sum_{i=1}^{m} \lambda_i \left( \frac{d^2 C_i(\mathbf{x})}{dx_j \, dx_k} \right) \tag{9.22} \\
\frac{d^2 L(\mathbf{x}, \boldsymbol{\lambda})}{dx_j \, d\lambda_i} &= -\left( \frac{dC_i(\mathbf{x})}{dx_j} \right) \tag{9.23} \\
\frac{d^2 L(\mathbf{x}, \boldsymbol{\lambda})}{d\lambda_j \, d\lambda_i} &= 0 \tag{9.24}
\end{align}

Thus, in addition to the standard gradient vector and Hessian matrix for the unconstrained function $F(\mathbf{x})$, we need both the first and second derivatives (with respect to coordinate displacement) of the constraint functions. Once these quantities are available, the corresponding Lagrangian gradient, Eq. (9.19), and Lagrangian second-derivative matrix, Eq. (9.21), can be formed, and the optimization step calculated in a similar manner to that for a standard unconstrained optimization [Baker, J. Comput. Chem. 13, 240 (1992)].
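The assembly of the blocks in Eqs. (9.22)–(9.24) into the bordered matrix of Eq. (9.21) can be sketched as follows (again illustrative NumPy under assumed array shapes, not Q-Chem source code):

```python
import numpy as np

def lagrangian_hessian(hess_F, lam, C_grads, C_hess):
    """Bordered second-derivative matrix of Eq. (9.21).

    hess_F  -- d2F/dx2,                        shape (n, n)
    lam     -- Lagrange multipliers,           shape (m,)
    C_grads -- constraint first derivatives,   shape (m, n)
    C_hess  -- constraint second derivatives,  shape (m, n, n)
    """
    n, m = hess_F.shape[0], len(lam)
    H = np.zeros((n + m, n + m))
    H[:n, :n] = hess_F - np.einsum('i,ijk->jk', lam, C_hess)  # Eq. (9.22)
    H[:n, n:] = -C_grads.T                                    # Eq. (9.23)
    H[n:, :n] = -C_grads                                      # symmetric block
    return H  # the lower-right m-by-m block stays zero, Eq. (9.24)
```

A plain Newton-type step over the combined parameter set would then be `step = -np.linalg.solve(H, gL)`, with `gL` the Lagrangian gradient; in practice the step is taken with the eigenvector-following machinery described below.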

In the Lagrange multiplier method, the unknown multipliers, $\lambda_i$, are an integral part of the parameter set. This means that the optimization space consists of all $n$ variables $\mathbf{x}$ plus all $m$ Lagrange multipliers $\boldsymbol{\lambda}$, one for each constraint. The total dimension of the constrained optimization problem, $n + m$, has thus increased by $m$ compared to the corresponding unconstrained case. The Lagrangian Hessian matrix, $\nabla^2 L$, has $m$ extra modes compared to the standard (unconstrained) Hessian matrix, $\nabla^2 F$. What normally happens is that these additional modes are dominated by the constraints (i.e., their largest components correspond to the constraint Lagrange multipliers) and they have negative curvature (a negative Hessian eigenvalue). This is perhaps not surprising when one realizes that any motion in the parameter space that breaks the constraints is likely to lower the energy.
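A one-dimensional toy problem (hypothetical numbers, chosen only to expose this structure) makes the extra negative mode explicit: minimizing $F(x) = x^2$ subject to $C(x) = x - 1 = 0$ has its Lagrangian stationary point at $x = 1$, $\lambda = 2$, where the $2 \times 2$ Lagrangian Hessian has exactly one negative eigenvalue.

```python
import numpy as np

# F(x) = x^2 subject to C(x) = x - 1 = 0; at the solution x = 1, lambda = 2.
H = np.array([[ 2.0, -1.0],    # [ d2L/dx2   -dC/dx ]
              [-1.0,  0.0]])   # [ -dC/dx      0    ]
print(np.linalg.eigvalsh(H))   # [-0.414, 2.414]: one negative constraint mode
```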

Compared to a standard unconstrained minimization, where a stationary point is sought at which the Hessian matrix has all positive eigenvalues, in the constrained problem we are looking for a stationary point of the Lagrangian function at which the Lagrangian Hessian matrix has as many negative eigenvalues as there are constraints (i.e., we are looking for an $m$th-order saddle point). For further details and practical applications of constrained optimization using Lagrange multipliers in Cartesian coordinates, see Baker, J. Comput. Chem. 13, 240 (1992).

Eigenvector following can be implemented in a constrained optimization in a similar way to the unconstrained case. For a constrained minimization with $m$ constraints, Eq. (9.11) is replaced by

\begin{equation}
\begin{pmatrix}
b_1 & & & F_1 \\
& \ddots & & \vdots \\
& & b_m & F_m \\
F_1 & \cdots & F_m & 0
\end{pmatrix}
\begin{pmatrix} h_1 \\ \vdots \\ h_m \\ 1 \end{pmatrix}
= \lambda_p
\begin{pmatrix} h_1 \\ \vdots \\ h_m \\ 1 \end{pmatrix} \tag{9.25}
\end{equation}

and Eq. (9.12) by

\begin{equation}
\begin{pmatrix}
b_{m+1} & & & F_{m+1} \\
& \ddots & & \vdots \\
& & b_{m+n} & F_{m+n} \\
F_{m+1} & \cdots & F_{m+n} & 0
\end{pmatrix}
\begin{pmatrix} h_{m+1} \\ \vdots \\ h_{m+n} \\ 1 \end{pmatrix}
= \lambda_n
\begin{pmatrix} h_{m+1} \\ \vdots \\ h_{m+n} \\ 1 \end{pmatrix} \tag{9.26}
\end{equation}

where the $b_i$ are now the eigenvalues of $\nabla^2 L$, with corresponding eigenvectors $\mathbf{u}_i$, and $F_i = \mathbf{u}_i^{t} \, \nabla L$. Here Eq. (9.25) includes the $m$ constraint modes, along which a negative Lagrangian Hessian eigenvalue is required, and Eq. (9.26) includes all the other modes.

Equations (9.25) and (9.26) implement eigenvector following for a constrained minimization. Constrained transition-state searches can be carried out by selecting one extra mode to be maximized in addition to the $m$ constraint modes, i.e., by searching for a saddle point of the Lagrangian function of order $m + 1$.
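In outline, the constrained eigenvector-following step can be sketched as below (illustrative NumPy, not Q-Chem source code; it assumes the $b_i$ and $F_i$ have already been formed by diagonalizing $\nabla^2 L$, and follows the usual eigenvector-following prescription of taking the highest root of Eq. (9.25) and the lowest root of Eq. (9.26) as the shift parameters):

```python
import numpy as np

def ef_shift(b, F):
    """Eigenvalues of the bordered matrix in Eq. (9.25) or Eq. (9.26)."""
    k = len(b)
    A = np.zeros((k + 1, k + 1))
    A[:k, :k] = np.diag(b)
    A[:k, k] = F                  # last column
    A[k, :k] = F                  # last row
    return np.linalg.eigvalsh(A)  # roots in ascending order

def constrained_ef_step(b, F, m):
    """Step components h_i along the eigenmodes u_i of the Lagrangian
    Hessian: maximize along the m constraint modes (shift lambda_p),
    minimize along the remaining n modes (shift lambda_n)."""
    lam_p = ef_shift(b[:m], F[:m])[-1]   # highest root of Eq. (9.25)
    lam_n = ef_shift(b[m:], F[m:])[0]    # lowest root of Eq. (9.26)
    h = np.empty_like(b)
    h[:m] = -F[:m] / (b[:m] - lam_p)
    h[m:] = -F[m:] / (b[m:] - lam_n)
    return h  # the geometry step is then sum_i h_i * u_i
```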

It should be realized that, in the Lagrange multiplier method, the desired constraints are only satisfied at convergence, and not necessarily at intermediate geometries. The Lagrange multipliers are part of the optimization space; they vary just as any other geometrical parameter and, consequently, the degree to which the constraints are satisfied changes from cycle to cycle, approaching 100% satisfaction near convergence. One advantage this brings is that, unlike in a standard Z-matrix approach, constraints do not have to be satisfied in the starting geometry.

Imposed constraints can normally be satisfied to very high accuracy, $10^{-6}$ or better. However, problems can arise for both bond-angle and dihedral-angle constraints near 0° and 180°. Instead of attempting to impose a single such constraint, it is better to split angle constraints near these limiting values into two by using a dummy atom [Baker and Bergeron, J. Comput. Chem. 14, 1339 (1993)], exactly analogous to splitting a 180° bond angle into two 90° angles in a Z-matrix.

Note:  Exact 0° and 180° single-angle constraints cannot be imposed, as the corresponding constraint normals, $\nabla C_i$, are zero, resulting in rows and columns of zeros in the Lagrangian Hessian matrix.
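This behavior is easy to verify numerically. The check below is a hypothetical illustration that assumes a cosine-form angle constraint, $C = \cos\theta - \cos\theta_0$ (a formulation consistent with the vanishing normal described in the note): the finite-difference gradient norm tends to zero as the bond angle approaches 180°.

```python
import numpy as np

def cos_bend(x):
    """cos(theta) for the bond angle at the central atom; x is a flat 9-vector."""
    a = x[0:3] - x[3:6]
    b = x[6:9] - x[3:6]
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def normal_norm(theta_deg, h=1.0e-6):
    """|grad C| for C = cos(theta) - cos(theta0), by central differences."""
    t = np.radians(theta_deg)
    x0 = np.array([1.0, 0.0, 0.0,                 # terminal atom 1
                   0.0, 0.0, 0.0,                 # central atom
                   np.cos(t), np.sin(t), 0.0])    # terminal atom 2 at angle theta
    g = np.array([(cos_bend(x0 + h*e) - cos_bend(x0 - h*e)) / (2.0*h)
                  for e in np.eye(9)])
    return np.linalg.norm(g)

for theta in (90.0, 150.0, 175.0, 179.9):
    print(f"theta = {theta:6.1f} deg   |grad C| = {normal_norm(theta):.5f}")
```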