TURBOMOLE is a program package for ab initio electronic structure calculations.
Overview of Functionality
TURBOMOLE is a highly optimized software package for large-scale quantum chemical simulations of molecules, clusters, and periodic solids. It consists of a series of modules covering a wide range of research areas; more details can be found on the TURBOMOLE web page.
Usage conditions and Licensing
TURBOMOLE may only be used for academic and teaching purposes.
Running Turbomole at LRZ
General information on using the module environment system at LRZ can be found here. For creating and submitting batch jobs to the respective queueing system, please refer to the documentation of LoadLeveler (SuperMUC) and the Linux Cluster on our webpages.
Before running any serial TURBOMOLE program, please load the serial module via:
> module load turbomole
This will adjust the required environment variables.
Recent versions of Turbomole are parallelized with HP-MPI. This allows using different types of network interconnects for parallel runs, and HP-MPI is supposed to select the fastest one available; sometimes, however, HP-MPI requires manual intervention to select the correct interconnect on SuperMUC or the Linux Cluster. As parallel runs of Turbomole also require one control process, all parallel runs should additionally only be run via the batch queueing systems, and you also need to configure an SSH key according to the description you find here.
Before running any parallel TURBOMOLE program, please load the parallel module via:
> module load turbomole/mpi
This will adjust the required environment variables.
$PARNODES will be set to the number of cores requested in the job script. If you want to use a smaller number of cores, please set $PARNODES accordingly, i.e.
> export PARNODES=[nCores]
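As a minimal sketch (assuming a SLURM environment, where SLURM_NTASKS holds the number of allocated tasks), a job script could derive $PARNODES from the allocation instead of hard-coding it; the fallback value of 1 is an assumption for illustration only:

```shell
# Derive the Turbomole core count from the SLURM allocation.
# SLURM_NTASKS is set by SLURM inside a job; the fallback of 1
# (illustrative assumption) keeps the snippet usable interactively.
export PARNODES=${SLURM_NTASKS:-1}
echo "PARNODES=$PARNODES"
```

This way the same script works unchanged when the number of requested tasks in the batch header is altered.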
After logging in to the system, your batch script for the Linux Cluster or SuperMUC could look like the examples below:
#!/bin/bash
#SBATCH -o /home/cluster/<group>/<user>/mydir/turbomole.%j.out
#SBATCH -D /home/cluster/<group>/<user>/mydir
#SBATCH -J <job_name>
#SBATCH --clusters=mpp1
#SBATCH --get-user-env
#SBATCH --ntasks=32
#SBATCH --mail-type=end
#SBATCH --mail-user=<email_address>@<domain>
#SBATCH --export=NONE
#SBATCH --time=24:00:00
source /etc/profile.d/modules.sh
#!/bin/bash
#@ wall_clock_limit = 48:00:00
#@ job_type = MPICH
#@ class = test
#@ island_count = 1
#@ node = 2
#@ tasks_per_node = 16
#@ total_tasks = 32
# (on the fat node system, you can use tasks_per_node = up to 40)
More information about how to set up parallel TURBOMOLE programs can be found in the TURBOMOLE documentation (see section "Parallel Runs").
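As an illustration, the tail of such a batch script could load the parallel module and then start a TURBOMOLE calculation; the use of the RI approximation (-ri) and the cycle limit below are illustrative assumptions, not required settings:

```shell
# Load the parallel Turbomole environment (module name as described above)
module load turbomole/mpi
# Use all cores granted by the batch system (assumes SLURM sets SLURM_NTASKS)
export PARNODES=${SLURM_NTASKS:-32}
# Run a geometry optimization with jobex, using the RI-DFT module (-ri)
# and limiting the optimization to at most 200 cycles (-c 200)
jobex -ri -c 200
```

The input (control file etc.) is assumed to have been prepared beforehand, e.g. with the define module, in the job's working directory.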
After the TURBOMOLE module is loaded, the documentation (DOK.ps or DOK.pdf) can be found in the directory $TURBOMOLE_DOC. You may also check the TURBOMOLE Forum.
TmoleX provides a graphical user interface for TURBOMOLE starting with version 6.0. Its features include:
- Import and export of coordinates from/to different formats like xyz, cosmo, sdf, ml2, car, arc
- Graphical visualization of molecular structure, including movies of gradients and vibrational frequencies
- Generation of molecular orbitals and automatic occupation
- Submitting jobs to queuing systems
- Viewing results from Turbomole jobs
For further information (also on the option for a local client installation) check the webpage of COSMOlogic.
To use TmoleX, please first load the TURBOMOLE module, then the Java module
> module load java
and then start TmoleX from command line with
A short introduction to the usage of TmoleX can be found under $TMOLEXDOC/Tutorial-tmolex-2-0.pdf.
To submit jobs to the MPP_Myri Cluster, please add these lines:
. /etc/profile.d/modules.sh
module load turbomole[.mpi]/6.x
cd $SGE_O_WORKDIR
If you have any questions or problems with TURBOMOLE installed on LRZ platforms, please contact LRZ HPC support.