Coarray Fortran on LRZ HPC systems
How to compile and run Coarray Fortran programs on LRZ HPC systems
Coarrays provide a parallel programming model based on the PGAS concept that was integrated into the Fortran 2008 standard.
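As a minimal illustration of the model (a generic sketch, not LRZ-specific code), the following program prints a line from every image and then synchronizes all images:

```fortran
program hello_caf
  implicit none
  integer :: me, n
  ! Every image executes this program; this_image() and num_images()
  ! are intrinsic inquiry functions defined by the Fortran 2008 standard.
  me = this_image()
  n  = num_images()
  write(*,'(a,i0,a,i0)') 'Hello from image ', me, ' of ', n
  sync all   ! barrier across all images
end program hello_caf
```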
Availability on LRZ HPC systems
Coarrays are supported by the following compilers on the LRZ HPC systems:
- Intel's Fortran compiler (ifort)
- GCC Fortran compiler (gfortran)
Both compilers support shared-memory as well as distributed-memory execution of coarray code.
Setting up the environment
This is done by loading an appropriate stack of environment modules:
- Intel compiler:
  module unload mpi.ibm mpi.mpt
  module load mpi.intel caf/intel
- GCC gfortran:
  module unload mpi.intel mpi.ibm mpi.mpt
  module load gcc/6 mpi.intel/5.1_gcc caf/gfortran
  or
  module load gcc/7 mpi.intel/2017_gcc caf/gfortran
Note that it may be necessary to unload other conflicting compiler or MPI modules before the CAF module stack can be loaded. The above is designed to work with the LRZ default environment. Setting up the environment should be done for both compilation and execution.
Compiling and linking
| Compiler and mode | Compiler driver | Compilation switch | Linkage switch |
| --- | --- | --- | --- |
| Intel compiler (shared memory) | ifort / mpif90 | -coarray | -coarray |
| Intel compiler (distributed memory) | ifort / mpif90 | -coarray | -coarray=distributed |
| GCC gfortran | caf | (none needed) | (none needed) |
The mpif90 driver can be used for Intel Fortran if the coarray code also uses MPI. For GCC gfortran this is not necessary because the caf driver script already includes the necessary functionality.
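As a sketch of how the table above translates into command lines (the source file name my_coarray_program.f90 is an assumption for illustration):

```shell
# Intel Fortran, shared memory: same switch for compilation and linkage
ifort -coarray -o my_coarray_program.exe my_coarray_program.f90

# Intel Fortran, distributed memory: distributed runtime selected at link time
ifort -coarray -c my_coarray_program.f90
ifort -coarray=distributed -o my_coarray_program.exe my_coarray_program.o

# GCC gfortran via the caf wrapper: no extra switches needed
caf -o my_coarray_program.exe my_coarray_program.f90
```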
Execution of coarray programs
Intel Fortran on a shared memory system
The following sequence of commands executes the program "my_coarray_program.exe" using 8 images that run on a single computational node.
export FOR_COARRAY_NUM_IMAGES=8
./my_coarray_program.exe
Intel Fortran on a distributed memory system
The following sequence of commands executes the program "my_coarray_program.exe"; in distributed-memory mode the number of images and their placement across computational nodes are taken from a configuration file.
export FOR_COARRAY_CONFIG_FILE=./my_coarray_program.conf
./my_coarray_program.exe
The configuration file my_coarray_program.conf needs to contain a line with at least the following entries:
| SLURM jobs | LoadLeveler jobs |
| --- | --- |
| -n <# of images> ./my_coarray_program.exe | -n <# of images> ./my_coarray_program.exe |
This information is supplied to the MPI startup mechanism. Other Intel MPI options, such as -genvall or -ppn (number of tasks per node), can also be added, but this is only necessary for specialized setups. The executable's name and its command-line arguments are taken from this configuration file, so care must be taken to keep these consistent.
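A hypothetical configuration file for 8 images, with the optional -genvall and -ppn settings from above added (the values shown are placeholders, not a recommendation):

```
-genvall -n 8 -ppn 4 ./my_coarray_program.exe
```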
GCC gfortran
The Intel MPI mpiexec command can be used to start up the coarray executable.
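For example (a sketch; the image count and program name are placeholders):

```shell
mpiexec -n 8 ./my_coarray_program.exe
```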
Interoperation of MPI/OpenMP and Coarrays
For both compilers, it should be possible to also use MPI calls in coarray programs (with some care to avoid destructive interference between the programming models). There is a one-to-one mapping between MPI ranks and image indices, i.e. each MPI rank corresponds to exactly one coarray image.
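A sketch of this rank/image correspondence (assuming the one-to-one mapping described above; error handling omitted):

```fortran
program caf_mpi
  use mpi
  implicit none
  integer :: rank, ierr
  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  ! With a one-to-one mapping, MPI rank k corresponds to image k+1
  ! (MPI ranks count from 0, coarray images from 1).
  write(*,'(a,i0,a,i0)') 'MPI rank ', rank, ' is image ', this_image()
  call MPI_Finalize(ierr)
end program caf_mpi
```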
It is also expected that OpenMP can be used in coarray programs (as long as no coindexed accesses happen inside threaded regions), particularly if such use is limited to lower-level (serial) libraries. The usual steps for hybrid execution need to be taken to ensure proper pinning of both tasks/images and threads.
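A sketch of OpenMP use inside a coarray program that keeps all coindexed accesses outside the threaded region, as recommended above:

```fortran
program caf_omp
  implicit none
  integer, parameter :: n = 1000
  real :: a(n)[*]          ! coarray: one copy of a per image
  real :: s
  integer :: i
  s = 0.0
  ! Threaded region: purely image-local work, no coindexed access.
  !$omp parallel do reduction(+:s)
  do i = 1, n
     a(i) = real(i * this_image())
     s = s + a(i)
  end do
  !$omp end parallel do
  sync all
  ! Coindexed (remote) access only outside the threaded region:
  if (this_image() == 1 .and. num_images() > 1) then
     write(*,*) 'first element on image 2: ', a(1)[2]
  end if
end program caf_omp
```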
Coarray Documentation
- Summary of the coarray feature (as covered in the Fortran 2008 standard) by John Reid.
- Andy Vaught's coarray compendium
- Intel Fortran compiler documentation, available from the Intel web site