Parallel execution of Q-Chem can be threaded across multiple processors on a single node using the OpenMP protocol, or distributed over multiple processor cores and/or multiple compute nodes using the message-passing interface (MPI) protocol. A hybrid MPI+OpenMP scheme is also available for certain calculations, in which each MPI process spawns several OpenMP threads. In this hybrid scheme, cross-node communication is handled by MPI, while intra-node communication is done implicitly through OpenMP threading, for efficient utilization of shared-memory parallel (SMP) systems. This parallelization strategy reflects the current trend toward multi-core architectures in cluster computing.
As of the v. 4.2 release, OpenMP parallelization is fully supported for HF/DFT, RI-MP2, CC, EOM-CC, and ADC methods. MPI parallel capability is available for SCF, DFT, CIS, and TDDFT methods. Hybrid MPI+OpenMP parallelization was introduced in v. 4.2 for HF/DFT energy and gradient calculations only. Distributed-memory MPI+OpenMP parallelization of CC and EOM-CC methods was added in Q-Chem v. 4.3. Table 2.1 summarizes the parallel capabilities of Q-Chem v. 4.4.
Table 2.1: Parallel capabilities of Q-Chem v. 4.4

Method                       | OpenMP | MPI | MPI+OpenMP
HF energy & gradient         | yes    | yes | yes
DFT energy & gradient        | yes    | yes | yes
CDFT/CDFT-CI                 | no     | no  | no
RI-MP2 energy                | yes    | no  | no
Attenuated RI-MP2 energy     | yes    | no  | no
Integral transformation      | yes    | no  | no
CCMAN & CCMAN2 methods       | yes    | yes | yes
ADC methods                  | yes    | no  | no
CIS energy & gradient        | no     | yes | no
TDDFT energy & gradient      | no     | yes | no
HF & DFT analytical Hessian  | no     | yes | no
To run a Q-Chem calculation with OpenMP threads, specify the number of threads (nthreads) using the qchem command option -nt. Since each thread uses one CPU core, you should not specify more threads than the total number of available CPU cores, for performance reasons. When unspecified, the number of threads defaults to 1 (a serial calculation).
qchem -nt nthreads infile outfile
qchem -nt nthreads infile outfile save
qchem -save -nt nthreads infile outfile save
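For example, assuming a hypothetical input file named water.in and an 8-core node (both the file name and the core count are illustrative), a threaded calculation could be launched as:

qchem -nt 8 water.in water.out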
Similarly, to run parallel calculations with MPI, use the option -np to specify the number of MPI processes to be spawned.
qchem -np n infile outfile
qchem -np n infile outfile savename
qchem -save -np n infile outfile savename
where n is the number of MPI processes to use. If the -np switch is not given, Q-Chem will default to running locally on a single node.
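As an illustrative sketch (the input file name and process count are assumptions), a run spawning four MPI processes could look like:

qchem -np 4 water.in water.out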
To run hybrid MPI+OpenMP HF/DFT calculations, use the options -np and -nt together, where -np is followed by the number of MPI processes to be spawned and -nt is followed by the number of OpenMP threads used within each MPI process.
qchem -np n -nt nthreads infile outfile
qchem -np n -nt nthreads infile outfile savename
qchem -save -np n -nt nthreads infile outfile savename
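As a sketch, assuming two 4-core nodes are available (the counts and file names are illustrative, not recommendations), a hybrid run with 2 MPI processes of 4 OpenMP threads each could be launched as:

qchem -np 2 -nt 4 water.in water.out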
When the additional argument savename is specified, the temporary files for MPI-parallel Q-Chem are stored in $QCSCRATCH/savename.0. At the start of a job, any existing files will be copied into this directory, and on successful completion of the job they will be copied to $QCSCRATCH/savename/ for future use. If the job terminates abnormally, the files will not be copied.
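As a hypothetical example, the following command runs a 4-process MPI job and saves its scratch files under the name water_job (an illustrative name), so that they are available in $QCSCRATCH/water_job/ for later reuse:

qchem -save -np 4 water.in water.out water_job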
To run parallel Q-Chem using a batch scheduler such as PBS, users may need to set the QCMPIRUN environment variable to point to the mpirun command used on the system. For further details, users should read the $QC/README.Parallel file and contact Q-Chem support (support@q-chem.com) if any problems are encountered.
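The following PBS submission script is a minimal sketch only; the job name, resource request, environment-setup path, and MPI launcher are assumptions that must be adapted to the local cluster:

#!/bin/bash
#PBS -N qchem_water              # job name (illustrative)
#PBS -l nodes=2:ppn=8            # request 2 nodes with 8 cores each (adjust as needed)
#PBS -l walltime=04:00:00        # wall-clock limit (adjust as needed)

cd $PBS_O_WORKDIR                # run from the directory the job was submitted from

# Set up the Q-Chem environment; the path below is an assumed install location.
source /opt/qchem/qcenv.sh

# Point Q-Chem at the system's MPI launcher if it is not detected automatically.
export QCMPIRUN=mpirun

# Hybrid MPI+OpenMP run: 2 MPI processes with 8 OpenMP threads each (illustrative).
qchem -np 2 -nt 8 water.in water.out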