IPM Integrated Performance Monitoring

What is IPM

IPM is a portable profiling infrastructure for parallel codes written in C and Fortran. It provides a low-overhead profile of the performance and resource utilization of a parallel program, with a primary focus on communication, computation, and I/O.

For more detailed information, please visit the IPM home page.

How to use IPM

Compiling and linking

The IPM module has to be loaded before compiling the parallel code. Please use the following command to load it:

          module load papi ipm

The papi module is needed to access the hardware performance counters. The user code must be linked with the option $IPM_LIB.

For example:

          mpicc -o my_code my_code.c $IPM_LIB
          mpirun -np 4 ./my_code
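
For reference, the my_code.c used in the example above could be any MPI program; IPM requires no source changes for its default profile. A minimal (purely illustrative) example:

```c
/* my_code.c - minimal MPI program matching the compile/link example above.
 * The file name and contents are illustrative, not required by IPM. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();   /* IPM writes its profile when the run ends */
    return 0;
}
```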

Hardware counters

IPM can collect data from hardware performance counters by using PAPI. Users can define a comma-separated list of the desired counters by setting the environment variable IPM_HPM. For example:

         export IPM_HPM=PAPI_FP_OPS,PAPI_TOT_INS,PAPI_L1_DCM,PAPI_L1_DCA

To list the available counters, please use the following commands:

        module load papi
        papi_avail

User-defined code regions

Users can mark code regions of interest in their applications by using MPI_Pcontrol, a runtime routine handled by IPM. The first argument to MPI_Pcontrol determines what action IPM takes:

Arguments           Description
 1, "region name"   start the code region "region name"
-1, "region name"   exit the code region "region name"
 0, "region name"   invoke the custom event "region name"

Example:

MPI_Pcontrol( 1,"proc_a");    //start code region of proc_a
MPI_Pcontrol(-1,"proc_a");    //exit code region of proc_a
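
Putting the calls together, a complete instrumented program might look like the following sketch (proc_a and step_done are illustrative names, not IPM requirements):

```c
/* Sketch of an MPI program with an IPM-instrumented code region. */
#include <mpi.h>

static void proc_a(void)
{
    /* ... computation and communication to be profiled ... */
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    MPI_Pcontrol( 1, "proc_a");    /* start code region "proc_a" */
    proc_a();
    MPI_Pcontrol(-1, "proc_a");    /* exit code region "proc_a" */

    MPI_Pcontrol( 0, "step_done"); /* invoke custom event "step_done" */

    MPI_Finalize();
    return 0;
}
```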

Running IPM

When the program completes, IPM generates a profiling report in XML format. IPM is able to provide two different types of reports; users select the desired type by setting the environment variable IPM_REPORT. Users can filter the measured data by setting the IPM_MPI_THRESHOLD environment variable.

Variable            Value            Description
IPM_REPORT          terse            Aggregate wallclock time, memory usage, and flops are reported, along with the percentage of wallclock time spent in MPI calls.
                    full (default)   Each HPM counter is reported, as are wallclock, user, system, and MPI time. The contribution of each MPI call to the communication time is given.
                    none             No report.
IPM_MPI_THRESHOLD   0.0 < x < 1.0    Only report MPI routines using more than the fraction x of the total MPI time.
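
For example, the following settings (illustrative values) request the terse report and hide MPI routines that account for less than a tenth of the total MPI time:

```shell
# Illustrative settings; adjust the values for your run.
export IPM_REPORT=terse
export IPM_MPI_THRESHOLD=0.1
```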

Just run the executable like a normal MPI program:

         mpiexec ....... my_code

HTML Output

IPM also allows users to generate a graphical webpage from the XML output file. Please use the following command to generate it:

         $IPM_WEB   XML_file_name

Details

  • IPM home page