CPMD: ab initio molecular dynamics

General Information

Overview

CPMD (Car-Parrinello Molecular Dynamics) is a program for ab initio molecular dynamics. It uses plane-wave/pseudopotential density functional theory, which makes quantum chemical computations fast enough for molecular dynamics of condensed-phase systems. For further details and documentation, please consult the CPMD home page.
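
As background (this is the standard textbook formulation, not anything specific to the LRZ installation): rather than re-converging the electronic structure at every time step, the Car-Parrinello method propagates the Kohn-Sham orbitals as fictitious dynamical variables alongside the nuclei, using the extended Lagrangian

\mathcal{L}_{\mathrm{CP}} = \mu \sum_i \langle \dot{\psi}_i | \dot{\psi}_i \rangle + \frac{1}{2} \sum_I M_I \dot{R}_I^2 - E_{\mathrm{KS}}[\{\psi_i\},\{R_I\}] + \sum_{i,j} \Lambda_{ij} \left( \langle \psi_i | \psi_j \rangle - \delta_{ij} \right)

where \mu is the fictitious electron mass, M_I the nuclear masses, E_{\mathrm{KS}} the Kohn-Sham energy functional, and \Lambda_{ij} Lagrange multipliers that keep the orbitals orthonormal.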

Installed Versions

Versions:

3.17.1

Operating system:

Linux

Machines:

SuperMUC (IBM), Linux-Cluster (MPP, UV, and ICE)

Licensor

The CPMD consortium
© IBM Corp. and MPI Stuttgart.
Email: cpmd@cpmd.org
WWW: http://www.cpmd.org/

Author

Prof. Dr. Michele Parrinello
Computational Science
Department of Chemistry and Applied Biosciences - ETH Zurich
USI - Campus, via Giuseppe Buffi 13
CH - 6900 Lugano

Licensing

CPMD, the quantum chemical molecular dynamics program of Prof. Car and Prof. Parrinello, is available at LRZ. However, an individual license is required to use this program, so it is not generally accessible. To gain access to CPMD, you must prove to LRZ that you hold a valid license, which you can obtain from the download site: http://cpmd.org/download/cpmd-licence.

Access and Execution

Once you have a valid license, please contact the LRZ HPC support staff via e-mail to obtain access to CPMD at LRZ. After your access has been validated, load the appropriate module environment right after login via:

% module load cpmd
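
To confirm which CPMD versions are installed and that the environment is set up correctly, the standard module commands can be used (CPMD_PP_LIBRARY_PATH is the variable referenced by the batch scripts below; the exact output depends on the installation):

% module avail cpmd
% module show cpmd
% echo $CPMD_PP_LIBRARY_PATH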

Serial and parallel interactive execution

CPMD can be executed interactively, either serially or in parallel, with:

% cpmd inputfilename >& outputfilename.log
% cpmd -n [N] inputfilename >& outputfilename.log

where [N] is the number of processors to use.
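
The input file uses CPMD's sectioned format. As a reference point, below is a minimal sketch of a wavefunction optimization for an H2 molecule, modeled on the standard CPMD tutorial examples; the pseudopotential file name H_MT_LDA.psp is an assumption and must exist in the pseudopotential library (see PPLIB in the batch scripts below):

&INFO
  H2 wavefunction optimization (illustrative sketch)
&END
&CPMD
  OPTIMIZE WAVEFUNCTION
&END
&SYSTEM
  ANGSTROM
  SYMMETRY
    1
  CELL
    8.0 1.0 1.0 0.0 0.0 0.0
  CUTOFF
    70.0
&END
&DFT
  FUNCTIONAL LDA
&END
&ATOMS
*H_MT_LDA.psp
  LMAX=S
  2
  4.371 4.000 4.000
  3.629 4.000 4.000
&END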

Running serial and parallel CPMD batch jobs

To run CPMD in batch mode, adapt one of the example job scripts below for your platform:

Linux-Cluster (SLURM):

#!/bin/bash
#SBATCH -o /home/hpc/<group>/<user>/cpmd.%j.out
#SBATCH -D /home/hpc/<group>/<user>/mydir
#SBATCH -J <job_name>
#SBATCH --clusters=mpp1
### or ice1 (for ICE) and uv2,uv3 (for UV)
#SBATCH --get-user-env
#SBATCH --ntasks=32
#SBATCH --mail-type=end
#SBATCH --mail-user=<email_address>@<domain>
#SBATCH --export=NONE
#SBATCH --time=24:00:00
source /etc/profile.d/modules.sh
module load cpmd
export PPLIB=$CPMD_PP_LIBRARY_PATH
cpmd -n 32 < inp.files >& out.log

SuperMUC (LoadLeveler):

#!/bin/bash
#@ wall_clock_limit = 24:00:00
#@ job_type = MPICH
#@ class = micro
#@ node = 10
#@ tasks_per_node = 16
#@ total_tasks = 160
#@ network.MPI = sn_all,not_shared,us
#@ island_count = 1,2
#@ notification = always
#@ notify_user = youremail@yoursite.xx
#@ initialdir = $(home)/mydir
#@ output = job$(jobid).out
#@ error = job$(jobid).err
#@ queue
. /etc/profile
. /etc/profile.d/modules.sh
module load cpmd
export PPLIB=$CPMD_PP_LIBRARY_PATH
cpmd -n 160 < inp.files >& out.log
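
Both scripts point PPLIB at the pseudopotential library provided with the module. If you maintain your own pseudopotential files, it should be possible to point PPLIB at a personal directory instead; this is an assumption based on the wrapper's use of PPLIB above, and the path is purely illustrative:

export PPLIB=$HOME/my_pseudopotentials   # hypothetical user-maintained library
cpmd -n 32 < inp.files >& out.log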

Then submit the job script using the sbatch (Linux-Cluster) or llsubmit (SuperMUC) command. Assuming the job script is named name-job.sh:

% sbatch  name-job.sh (for Linux-Cluster)

and for LoadLeveler (SuperMUC) with

% llsubmit  name-job.sh
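
After submission, the job can be monitored with the respective queue tools (standard SLURM and LoadLeveler commands, not LRZ-specific; the cluster name matches the --clusters setting in the script above):

% squeue --clusters=mpp1 -u $USER (for Linux-Cluster)
% llq -u $USER (for SuperMUC)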

There are many other parameters for the batch systems at LRZ. For more details, see Batch Queuing and Job Policies.

Example of strong scaling on HLRB-2 (Altix-4700)

The input used represents a simulation of a real system: water in a box (128 H2O molecules). The performance results for up to 508 cores are plotted in the figure below.

[Figure: strong scaling of CPMD for the 128-molecule H2O benchmark on HLRB-2]

Support

If you have any questions regarding CPMD at LRZ, please don't hesitate to contact the LRZ HPC support staff.

You can also pose questions directly on the CPMD mailing list: http://www.cpmd.org/