TURBOMOLE

TURBOMOLE is a program package for ab initio electronic structure calculations.

General Information

Overview of Functionality

TURBOMOLE is a highly optimized software package for large-scale quantum chemical simulations of molecules, clusters, and periodic solids. TURBOMOLE consists of a series of modules and covers a wide range of research areas; more details can be found on the TURBOMOLE web page.

Usage conditions and Licensing

TURBOMOLE may only be used for academic and teaching purposes.

Running Turbomole at LRZ

General information on using the module environment system at LRZ can be found here. For creating and submitting batch jobs to the respective queueing system, please refer to the documentation of LoadLeveler (SuperMUC) and the Linux Cluster on our web pages.

Serial Version

Before running any serial TURBOMOLE program, please load the serial module via:

  > module load turbomole  


This will adjust the variables $TURBODIR and $PATH.
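
For example, a minimal serial run could look like the following sketch (it assumes the working directory already contains a complete TURBOMOLE input, i.e. a control file prepared with the interactive define tool; <path to your input files> is a placeholder):

   > module load turbomole
   > echo $TURBODIR             # check that the installation path is set
   > cd <path to your input files>
   > dscf > dscf.out            # serial SCF calculation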

Parallel Version

Recent versions of TURBOMOLE are parallelized with HP-MPI. This allows different types of network interconnects to be used for parallel runs, and HP-MPI is supposed to select the fastest one available; sometimes, however, manual intervention is required to select the correct interconnect on SuperMUC or the Linux Cluster. Since parallel runs of TURBOMOLE additionally require one control process, all parallel runs must be submitted through the batch queuing system, and you also need to configure an SSH key according to the description you find here.
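
One possible manual intervention (a sketch; it assumes that the installed HP-MPI version evaluates the MPI_IC_ORDER environment variable, which lists interconnects in order of preference) is:

   > export MPI_IC_ORDER="ibv:tcp"    # try InfiniBand verbs first, fall back to TCP

Please check the documentation of the installed HP-MPI version for the exact variable name and the supported values.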

Before running any parallel TURBOMOLE program, please load the parallel module via:

  > module load turbomole/mpi  

This will adjust the variables $TURBODIR, $PATH, $PARA_ARCH and $PARNODES.

$PARNODES will be set to the number of cores requested in the job script. If you want to use a smaller number of cores, set $PARNODES to the desired number explicitly, e.g.

   > export PARNODES=[nCores]
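
For example, to run the parallel ridft module on only 16 of 32 allocated cores:

   > export PARNODES=16
   > ridft > ridft.out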

After logging in to the system, your batch script for the Linux Cluster or SuperMUC could look like the examples below:

MPP-Cluster (SLURM)

#!/bin/bash
#SBATCH -o /home/cluster/<group>/<user>/mydir/turbomole.%j.out
#SBATCH -D /home/cluster/<group>/<user>/mydir
#SBATCH -J <job_name>
#SBATCH --clusters=mpp1
#SBATCH --get-user-env
#SBATCH --ntasks=32
#SBATCH --mail-type=end
#SBATCH --mail-user=<email_address>@<domain>
#SBATCH --export=NONE
#SBATCH --time=24:00:00
source /etc/profile.d/modules.sh
 
module load turbomole/mpi/7.1
export MYDIR=<path to your input files>
cd $MYDIR
export PARA_ARCH=MPI
export TM_MPI_ROOT=$MPI_BASE
export HOSTS_FILE=$MYDIR/turbomole.machines
rm -f $HOSTS_FILE
# build the machines file: one line per MPI task
# (SLURM_TASKS_PER_NODE may have the form "16(x2)"; strip the repetition suffix)
for i in `scontrol show hostname $SLURM_NODELIST`; do
  for j in $(seq 1 ${SLURM_TASKS_PER_NODE%%(*}); do echo $i >> $HOSTS_FILE; done
done
export CORES=`wc -l < $HOSTS_FILE`
export PARNODES=$CORES
### or use: export PARNODES=$SLURM_NPROCS
## execute with, for example:
echo "Running dscf"
dscf > $MYDIR/dscf.out
ricc2 > $MYDIR/ricc2.out
# or a geometry optimization:
jobex -ri

SuperMUC (LoadLeveler)

#!/bin/bash
#@ wall_clock_limit = 48:00:00
#@ job_type = MPICH
#@ class = test
#@ island_count=1
#@ node = 2
#@ tasks_per_node = 16
#@ total_tasks=32
# (on the fat node system, you can use tasks_per_node = up to 40)
#@ network.MPI = sn_all,not_shared,us
#@ energy_policy_tag = my_energy_tag
#@ minimize_time_to_solution = yes
#@ initialdir = $(home)/<path to your input files>
#@ output = job$(jobid).out
#@ error = job$(jobid).err
#@ notification=always
#@ notify_user=your-email@xyz.de
#@ queue
. /etc/profile
. /etc/profile.d/modules.sh
module load turbomole/mpi/6.6
export MYDIR=<path to your input files>
# use the LoadLeveler step id to get a unique scratch directory
export WKD=$SCRATCH/turbowork.$LOADL_STEP_ID
mkdir -p $WKD || exit 1
cd $MYDIR || exit 2
cp -r ./* $WKD
cd $WKD
rm -f *.out *.err
cat $LOADL_HOSTFILE > ./host.list
export PARNODES=`cat $LOADL_HOSTFILE | wc -l`
echo "Running dscf"
dscf > $MYDIR/dscf.out
ricc2 > $MYDIR/ricc2.out
cp * $MYDIR
cd $MYDIR
rm -rf $WKD

Several example script files for different use cases on our systems can be found here for the Linux Cluster, and also for SuperMUC.

More information about how to set up parallel TURBOMOLE programs can be found in the TURBOMOLE documentation (see section "Parallel Runs").

Documentation

After the TURBOMOLE module is loaded, the documentation (DOK.ps or DOK.pdf) can be found in the directory $TURBOMOLE_DOC. You may also check the TURBOMOLE Forum.

TmoleX

TmoleX provides a graphical user interface for TURBOMOLE starting with version 6.0. Its features include:

  • Import and export of coordinates from/to different formats like xyz, cosmo, sdf, ml2, car, arc 
  • Graphical visualization of molecular structure, including movies of gradients and vibrational frequencies
  • Generation of molecular orbitals and automatic occupation 
  • Submitting jobs to queuing systems
  • Viewing results from Turbomole jobs

For further information (including the option of a local client installation), check the web page of COSMOlogic.

To use TmoleX, first load the TURBOMOLE module, then the Java module

   > module load java

and then start TmoleX from command line with

   > TmoleX

A short introduction to the usage of TmoleX can be found at $TMOLEXDOC/Tutorial-tmolex-2-0.pdf.

To submit jobs to the MPP_Myri Cluster, please add these lines to your job script:

   . /etc/profile.d/modules.sh 
   module load turbomole[/mpi]/6.x
   cd $SGE_O_WORKDIR

Support

If you have any questions or problems with TURBOMOLE installed on LRZ platforms please contact LRZ HPC support.