The program package Wien2k performs electronic structure calculations of solids using density functional theory (DFT). It is based on the full-potential (linearized) augmented plane-wave ((L)APW) + local orbitals (lo) method, one of the most accurate schemes for band-structure calculations. Wien2k is an all-electron scheme that includes relativistic effects. It was developed at the Institute for Materials Chemistry of the Technical University of Vienna and is available on LRZ platforms in both serial and parallel versions.
Setup and access for Wien2k
The module package controls access to the software. To use the latest Wien2k version 11.1, load the appropriate module environment after logging in to the system, or include the following line in your job script:
module load wien2k
To see where the Wien2k executables reside (e.g. the bin directory) and which environment variables are added to your PATH, type
module show wien2k
Running Wien2k at LRZ
You can run Wien2k on the LRZ platforms as a script-driven batch job, or perform small test runs in an interactive shell. Sample scripts for the different platforms, which you can modify according to your needs, are provided below. Note that parallel execution requires a ".machines" file in your working directory and the "-p" switch. Fine-grained MPI-parallel versions of the programs lapw0, lapw1, and lapw2 are available.
A ".machines" file template is provided with the Wien2k installation.
If the ".machines" file does not exist, or if the "-p" switch is omitted, the serial versions of the programs are executed.
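As an illustration only (not an official LRZ template), a minimal ".machines" file for a single node might look as follows; the host name node01 and the core counts are placeholders:
# example .machines file (host name node01 is a placeholder)
# MPI-parallel lapw0 on 32 cores
lapw0:node01:32
# four k-point groups for lapw1/lapw2, each MPI-parallel on 8 cores
1:node01:8
1:node01:8
1:node01:8
1:node01:8
granularity:1
extrafine:1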
Linux-Cluster (SLURM): ICE, UV, and MPP
Start an interactive shell with:
salloc --ntasks=32 --partition=mpp1_inter
(ice1_inter for ICE, uv1_inter for UV)
module load wien2k
loads the Wien2k environment
Now modify your ".machines" file and start your calculation, e.g. with:
run_lapw -p -NI
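For orientation, a complete interactive test run on the MPP segment might look like the following sketch; the case directory path is a placeholder and the ".machines" file is assumed to have been prepared as described above:
salloc --ntasks=32 --partition=mpp1_inter
module load wien2k
cd $HOME/WIEN2k/mycase        # example case directory, adjust to your setup
# edit .machines here, then start the parallel SCF cycle
run_lapw -p -NI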
Sample job script files for Linux-Cluster and SuperMUC
Linux-Cluster (SLURM):
#!/bin/bash
#SBATCH -o /home/cluster/group/userID/mydir/job%j.out
#SBATCH -D /home/cluster/group/userID/mydir/
#SBATCH -J jobname
#SBATCH --time=01:00:00
#SBATCH --ntasks=32
#SBATCH --get-user-env
#SBATCH --clusters=mpp1
# use --clusters=ice1 for ICE, uv1 for UV
#SBATCH --export=NONE
#SBATCH --mail-type=end
#SBATCH --mail-user=name@domain
source /etc/profile.d/modules.sh
# load the wien2k module
module load wien2k
# change to working directory
cd $OPT_TMP/mydir
export SCRATCH=./
TMP_DIR=$SCRATCH/case   # supermuc
cd $TMP_DIR
rm -fr .machines
# for 32 cpus and kpoints (in input file)
nproc=32
# write .machines file
echo '#' > .machines
# example for an MPI parallel lapw0
echo 'lapw0:'`hostname`' :'$nproc >> .machines
# k-point and mpi parallel lapw1/2
echo '1:'`hostname`':3' >> .machines
echo '1:'`hostname`':3' >> .machines
echo '1:'`hostname`':3' >> .machines
echo '1:'`hostname`':3' >> .machines
echo '1:'`hostname`':3' >> .machines
echo '1:'`hostname`':3' >> .machines
echo '1:'`hostname`':3' >> .machines
echo '1:'`hostname`':3' >> .machines
echo '1:'`hostname`':3' >> .machines
echo '1:'`hostname`':3' >> .machines
echo '1:'`hostname`':2' >> .machines
echo 'granularity:1' >> .machines
echo 'extrafine:1' >> .machines
run_lapw -p -cc 0.0001 -i 50 -it
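The ten repeated echo lines in the script above can also be written as a short loop; this is merely a convenience sketch that produces the same ".machines" content:
# k-point and MPI parallel lapw1/2, 10 groups with 3 cores each
for i in $(seq 1 10); do
    echo '1:'`hostname`':3' >> .machines
done
echo '1:'`hostname`':2' >> .machines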
SuperMUC (LoadLeveler):
#!/bin/bash
#@ job_type = parallel
#@ environment = COPY_ALL
Then submit the job script using the sbatch (SLURM) or llsubmit (LoadLeveler) command.
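For example, assuming the script was saved as job.cmd (the file name is arbitrary):
sbatch job.cmd      # Linux-Cluster (SLURM)
llsubmit job.cmd    # SuperMUC (LoadLeveler)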
Job control using SLURM
For more information concerning job submission with SLURM, see the LRZ documentation on SLURM.
Check the status of a user's submitted jobs:
squeue -u user-ID
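To cancel a submitted job, the standard SLURM command can be used (job-ID as reported by squeue):
scancel job-ID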
Job control using LoadLeveler
For more information concerning job submission with LoadLeveler, see the LRZ documentation on LoadLeveler.
Check the status of a user's submitted jobs:
llq -u user-ID
Cancel a submitted job:
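The standard LoadLeveler command for this is:
llcancel job-ID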
Documentation and support
The Wien2k Users Manual is particularly useful.
The Wien2k Textbooks website has a detailed description of the electronic structure methods available in Wien2k, and a set of technical notes.
For troubleshooting, the keyword search in the Wien2k mailing list is very helpful.
If you have any questions or problems with Wien2k on the LRZ platforms, please do not hesitate to contact the LRZ HPC support staff.