mpiP: Lightweight, Scalable MPI Profiling


mpiP is a lightweight profiling library for MPI applications. Because it collects only statistical information about MPI functions, mpiP generates considerably less overhead and much less data than tracing tools. All the information captured by mpiP is task-local. It uses communication only during report generation, typically at the end of the run, to merge the results from all tasks into one output file. Using mpiP is very simple: because it gathers MPI information through the MPI profiling layer, mpiP is a link-time library, so no source code changes are required.

Installation on LRZ HPC platforms

The MPI variants that can be used are IBM PE, Intel MPI, SGI MPT, and OpenMPI. Please note that OpenMPI is not fully supported at LRZ.

Supported platforms: SuperMUC, MPP Cluster, ICE Cluster and UV Cluster


Before using mpiP, it is necessary to load the appropriate environment module:

module add mpip

after loading the MPI module that should be used. Then recompile your application, linking in the mpiP library. It is recommended to also compile with a debugging option (-g) so that mpiP can automatically decode the performance data to source code file names and line numbers.

mpicc -g -O -o foo.exe foo.c $MPIP_LIB
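The steps above can be sketched as a single build sequence. The MPI module name used here is illustrative only; substitute the module for the MPI variant you actually use on your system:

```shell
# Load the MPI module first (name is a placeholder for your MPI variant),
# then the mpip module, which defines $MPIP_LIB.
module load mpi.intel    # illustrative module name
module add mpip

# Recompile with debug info (-g) so mpiP can map results to source lines.
mpicc -g -O -o foo.exe foo.c $MPIP_LIB
```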

Application profiling

Run your application as usual, using the mpiexec command or similar (e.g. poe, srun) for startup. Also consider setting the MPIP environment variable appropriately for more precise control of how the report is generated (please consult the documentation linked below for details).
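As a sketch, run-time behaviour can be controlled through the MPIP environment variable before launching the application. The options shown (-k for the callsite stack-walk depth, -f for the report output directory) come from the mpiP documentation; the launcher and task count are illustrative:

```shell
# -k 2: record two stack frames per callsite
# -f ./mpip-reports: write the report into this (existing) directory
export MPIP="-k 2 -f ./mpip-reports"
mpiexec -n 4 ./foo.exe
```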

You can verify that mpiP is working by looking for its header and trailer lines in standard output:

mpiP: mpiP V3.4 (Build ...)
mpiP: Direct questions and errors to
mpiP: Storing mpiP output in [./foo.exe.4.8483.1.mpiP].

By default, the output file is written to the current working directory of the application. mpiP report files are much smaller than trace files, so writing them to this directory is normally unproblematic.

Further Documentation

mpiP Homepage