Recent Advances in Parallel Programming Languages

Scientific Workshop @ LRZ, June 8, 2015

As the standards of parallel programming languages grow ever more complex and extensive, it can be hard to stay up to date with recent developments. We have therefore invited leading HPC experts to give updates on recent advances in parallel programming languages.

Languages covered during the workshop are MPI, OpenMP, OpenACC and Coarray Fortran. Most of the speakers are members of the standardisation committee of the language they present.

Participants: group photo (photo: Vasileios Karakasis)

Location

Leibniz-Rechenzentrum der Bayerischen Akademie der Wissenschaften (LRZ)
Boltzmannstr. 1 - D-85748 Garching bei München

Hörsaal (lecture hall) H.E.009

How to get to the LRZ: see http://www.lrz.de/wir/kontakt/weg_en/

Speakers

Dr. Reinhold Bader (LRZ)

has been a member of the scientific staff at the Leibniz Supercomputing Centre (LRZ) since 1999. As leader of the HPC group, he is responsible for the operation of the HPC systems at LRZ. Furthermore, he participates in the standardisation activities for the Fortran programming language in the international working group WG5.

Dr.-Ing. Michael Klemm (Intel Corp.)

is part of Intel's Software and Services Group, Developer Relations Division. His focus is on High Performance and Throughput Computing. He obtained an M.Sc. in Computer Science in 2003 and received a Doctor of Engineering degree (Dr.-Ing.) in Computer Science from the Friedrich-Alexander University Erlangen-Nuremberg, Germany, in 2008. His research focus was on compilers and runtime optimisations for distributed systems. Michael's areas of interest include compiler construction, design of programming languages, parallel programming, and performance analysis and tuning. He is the Intel representative in the OpenMP Language Committee, leads the efforts to develop error-handling features for OpenMP, and is also the maintainer of the pyMIC offload infrastructure for Python.

Dr. Mandes Schönherr (Cray Inc.)

received a PhD in computational chemistry from the University of Zurich. Mandes joined Cray Inc. in Germany in 2014 as an application analyst and provides on-site application support at HLRS. As part of his job he trains HLRS and PRACE users in all topics concerning performance optimisation on the Cray XC system and its environment, including OpenACC.

Dr. Rolf Rabenseifner (HLRS)

is head of Parallel Computing - Training and Application Services at HLRS. In workshops and summer schools he teaches parallel programming models at many universities and labs. He has been a member of the MPI-2 Forum since 1996 and of the steering committee of the MPI-3 Forum since December 2007; he was responsible for the new MPI-2.1 standard and in charge of the development of the new MPI-3 Fortran interface. In January 2012, the Gauss Centre for Supercomputing (GCS), with HLRS, LRZ in Garching, and the Jülich Supercomputing Centre as members, was selected as one of six PRACE Advanced Training Centres (PATCs), and he was appointed as GCS' PATC director.

Tentative Schedule

09:00-09:10 Welcome (Dr. Volker Weinberg, LRZ)

09:10-10:10 Extensions of Coarray Fortran  (Dr. Reinhold Bader, LRZ) 

10:10-10:30 Q&A + Break

10:30-11:30 Past, Present, and Future of OpenMP (Dr.-Ing. Michael Klemm, Intel)

11:30-12:00 Q&A

12:00-13:00 Lunch Break

13:00-14:00 OpenACC (Dr. Mandes Schönherr, Cray)

14:00-14:30 Q&A + Break

14:30-15:30 MPI 3.0/3.1 (Dr. Rolf Rabenseifner, HLRS)

15:30-16:00 Q&A + Wrap-Up

Organisation & Contact

Dr. Volker Weinberg (LRZ)

Registration

Via the LRZ registration form. Please choose the course HPPL1S15. Participation is free.

Abstracts

  • Extensions of Coarray Fortran

    The talk presents the planned extensions to the parallel syntax and semantics of Fortran within the technical specification TS 18508.

    In particular the following topics will be covered in detail:

    – one-sided synchronisation with events
    – collective intrinsic functions
    – atomic intrinsic functions
    – "composable parallelism" via the creation of teams
    – fail-safe execution: concepts for continuation of program execution in the face of partial hardware failures

    It is planned to integrate TS 18508 into the next Fortran standard after its adoption.
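
    A minimal sketch of the first two items, assuming a compiler that already implements TS 18508 (all names are illustrative):

      program ts18508_demo
        use, intrinsic :: iso_fortran_env, only: event_type
        implicit none
        type(event_type) :: ready[*]   ! one event variable per image
        real :: x

        x = real(this_image())

        ! collective intrinsic: sum x across all images, result on image 1
        call co_sum(x, result_image=1)

        ! one-sided synchronisation: image 1 signals image 2 directly,
        ! without a global barrier involving all images
        if (this_image() == 1) then
           event post(ready[2])
        else if (this_image() == 2) then
           event wait(ready)
        end if

        if (this_image() == 1) print *, 'sum over all images:', x
      end program ts18508_demo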

  • Past, Present, and Future of OpenMP

    OpenMP is one of the most widespread parallel programming models for shared-memory platforms in HPC. Its history dates back to 1997, a time when parallel programming was machine-dependent and cumbersome. In this presentation, we will briefly revisit the history of OpenMP and present the features of the current OpenMP version 4.0. We will also provide an outlook on OpenMP 4.1 and OpenMP 5.0.
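
    As an appetiser for the 4.0 feature set, a minimal Fortran sketch combining the target offload and SIMD constructs introduced in OpenMP 4.0 (routine and array names are illustrative):

      subroutine saxpy(n, a, x, y)
        implicit none
        integer, intent(in) :: n
        real, intent(in)    :: a, x(n)
        real, intent(inout) :: y(n)
        integer :: i

        ! offload the loop to a device if one is available; the map
        ! clauses control data movement between host and device
        !$omp target map(to: x) map(tofrom: y)
        !$omp parallel do simd
        do i = 1, n
           y(i) = a*x(i) + y(i)
        end do
        !$omp end parallel do simd
        !$omp end target
      end subroutine saxpy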

  • OpenACC

    The OpenACC API provides an efficient way to offload intensive calculations from a host to an accelerator device using directives. These directives accelerate loops and regions written in the standard programming languages C, C++ and Fortran, and are portable across different operating systems, host CPUs and a wide variety of accelerators, e.g. APUs, GPUs and many-core coprocessors.
    This directive-based programming model allows the programmer to develop high-level host-accelerator applications without explicitly initialising the accelerator, managing data or program transfers, or starting up and shutting down the accelerator.
    We will give a short introduction to OpenACC.
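
    A minimal sketch of this model in Fortran (names are illustrative): the directives are the only change relative to the serial code, and the compiler generates the device kernel and all transfers itself.

      subroutine scale(n, a, b)
        implicit none
        integer, intent(in) :: n
        real, intent(in)    :: a(n)
        real, intent(out)   :: b(n)
        integer :: i

        ! copyin/copyout describe the required data movement; device
        ! initialisation and kernel launch are handled by the runtime
        !$acc parallel loop copyin(a) copyout(b)
        do i = 1, n
           b(i) = 2.0*a(i)
        end do
        !$acc end parallel loop
      end subroutine scale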

  • MPI 3.0/3.1

    MPI-3.0 is a major update to the MPI standard. The updates include nonblocking versions of the collective operations, sparse and scalable irregular neighborhood collectives, a new Fortran 2008 binding, a new tools interface, new routines to handle large counts, and extensions to the one-sided operations, including a new shared memory programming model.

    MPI-3.1 is mainly an errata update to the MPI standard. New functions added include routines to manipulate MPI_Aint values in a portable manner, nonblocking collective I/O routines, and additional routines in the tools interface.

    The talk will give an overview of the new features and will discuss the new MPI shared memory programming model in more detail.
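
    As a flavour of that shared memory model, a sketch using the new mpi_f08 Fortran binding (sizes and names are illustrative): all ranks on a node map a window allocated by node-rank 0 and access it with plain loads and stores instead of MPI_Put/MPI_Get.

      program mpi_shm
        use mpi_f08
        use, intrinsic :: iso_c_binding, only: c_ptr, c_f_pointer
        implicit none
        integer, parameter :: n = 1000
        type(MPI_Comm) :: nodecomm
        type(MPI_Win)  :: win
        type(c_ptr)    :: baseptr
        real, pointer  :: shm(:)
        integer :: noderank, disp_unit
        integer(MPI_ADDRESS_KIND) :: winsize

        call MPI_Init()
        ! one sub-communicator per shared-memory node (new in MPI-3.0)
        call MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0, &
                                 MPI_INFO_NULL, nodecomm)
        call MPI_Comm_rank(nodecomm, noderank)

        ! node-rank 0 allocates the segment; all others join with size 0
        winsize = 0
        if (noderank == 0) winsize = n * 4_MPI_ADDRESS_KIND  ! 4 bytes/real
        call MPI_Win_allocate_shared(winsize, 4, MPI_INFO_NULL, nodecomm, &
                                     baseptr, win)

        ! every rank queries where rank 0's segment starts and maps it
        call MPI_Win_shared_query(win, 0, winsize, disp_unit, baseptr)
        call c_f_pointer(baseptr, shm, [n])

        call MPI_Win_fence(0, win)
        if (noderank == 0) shm = 42.0   ! direct store, no MPI_Put needed
        call MPI_Win_fence(0, win)      ! make the data visible node-wide

        call MPI_Win_free(win)
        call MPI_Finalize()
      end program mpi_shm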

Slides

pdf Introduction (V. Weinberg, LRZ)

pdf Extensions of Coarray Fortran (R. Bader, LRZ)

pdf Past, Present, and Future of OpenMP (M. Klemm, Intel)

pdf OpenACC (M. Schönherr, Cray)

pdf MPI 3.0/3.1 (R. Rabenseifner, HLRS)

Further courses and workshops @ LRZ

LRZ is part of the Gauss Centre for Supercomputing (GCS), which is one of the six PRACE Advanced Training Centres (PATCs) that started in 2012.

Information on further HPC courses: