Introduction to PGAS for HPC
|Date:||October 21-22, 2014 (starting at 9:00 am)|
|Location:||LRZ Building, Garching/Munich, Boltzmannstr. 1|
In this tutorial we present an asynchronous dataflow programming model for the Partitioned Global Address Space (PGAS) as an alternative to the message-passing model of MPI.
GASPI: Global Address Space Programming Interface
GASPI, which stands for Global Address Space Programming Interface, is a partitioned global address space (PGAS) API. The GASPI API is designed as a C/C++/Fortran library and focuses on three key objectives: scalability, flexibility, and fault tolerance. To achieve its improved scaling behavior, GASPI relies on asynchronous dataflow with remote completion rather than bulk-synchronous message exchanges. GASPI follows a single/multiple program, multiple data (SPMD/MPMD) approach and offers a small yet powerful API (see also http://www.gaspi.de and http://www.gpi-site.com).
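To make the "remote completion" idea concrete, the following is a minimal Python sketch of one-sided communication with notifications, using threads to stand in for ranks. It is illustrative only: GASPI's real interface is a C library (with calls such as gaspi_write_notify and gaspi_notify_waitsome), and the `Segment` class here is a hypothetical toy, not part of any GASPI implementation.

```python
import threading

class Segment:
    """Toy model of a memory segment: one rank's partition of the
    global address space, writable by remote ranks."""
    def __init__(self, size):
        self.data = bytearray(size)
        self.notifications = {}            # notification id -> value
        self.cond = threading.Condition()

    def write_notify(self, offset, payload, notify_id, notify_value):
        """One-sided write into this segment plus a notification.
        The writer never synchronizes with the owner (no matching
        receive), which is what makes the exchange asynchronous."""
        with self.cond:
            self.data[offset:offset + len(payload)] = payload
            self.notifications[notify_id] = notify_value
            self.cond.notify_all()

    def notify_waitsome(self, notify_id):
        """The owner blocks until the notification arrives. Remote
        completion means the data is already in place on return."""
        with self.cond:
            self.cond.wait_for(lambda: notify_id in self.notifications)
            return self.notifications.pop(notify_id)

# Rank 1 owns a segment; rank 0 writes into it asynchronously.
seg_rank1 = Segment(64)

def rank0():
    seg_rank1.write_notify(0, b"halo data", notify_id=7, notify_value=1)

def rank1():
    value = seg_rank1.notify_waitsome(7)
    assert value == 1
    print(seg_rank1.data[:9].decode())     # prints "halo data"

t0 = threading.Thread(target=rank0)
t1 = threading.Thread(target=rank1)
t1.start(); t0.start(); t0.join(); t1.join()
```

The key design point mirrored here is that the data transfer and its completion signal travel together: the target learns about the data only via the notification, so no bulk-synchronous barrier or matching receive is needed.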
GASPI is successfully used in academic and industrial simulation applications. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the basic constructs of GASPI. This course provides scientific training in Computational Science and, in addition, fosters scientific exchange among the participants.
Coarray Fortran and UPC
Both coarrays (defined in the Fortran standard) and UPC (provided as a language extension to C) offer parallel language features based on the concept of the Partitioned Global Address Space. The main difference from GASPI is their tighter integration with the regular language semantics, especially the type system, which improves ease of use for some programming tasks. This course provides an introduction to both language extensions and includes a hands-on session during which participants can explore the concepts.
GPI-Space: High-Performance Computing Technology for data-intensive parallel applications
Modern HPC programmers face the complexity of massively parallel computers and a growing diversity of hardware architectures. They find themselves writing increasingly complex communication routines to orchestrate the data flow and the workload of their applications, starting from scratch for each new application.
GPI-Space is a tool developed by Fraunhofer ITWM to separate domain-specific knowledge from the world of computer science. The data flow is organized in the form of a workflow, using a high-level description language. A workflow represents a Petri net of states and transitions; so-called data tokens are manipulated by these transitions as they migrate from one state to the next. To define the transitions, the domain expert provides the calculation routines in a programming language of their choice. This makes GPI-Space a powerful and convenient tool for parallelizing new applications as well as legacy code.

GPI-Space comprises three building blocks: a workflow engine, a distributed runtime system, and a virtual memory layer. The workflow engine provides features such as dynamic load balancing, overlap of communication and computation, and rescheduling in case of faulting transitions. Arbitrary application patterns are supported, not just a single pattern such as MapReduce. Furthermore, GPI-Space is not restricted to batch processing: it also supports processing live data streams, or any combination of both. The virtual memory layer forms a Partitioned Global Address Space (PGAS) that stores data in memory and provides highly efficient inter-node communication routines based on the Global Address Space Programming Interface (GPI-2).

This course gives an introduction to GPI-Space. It is held by GPI-Space experts from Fraunhofer ITWM and is targeted at professional developers as well as students. Participants will get an introduction to the basics of GPI-Space and its components in an interactive way, including live demos and hands-on examples. After the course they will have a good understanding of how GPI-Space can increase the performance and efficiency of their own applications.
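The Petri-net workflow idea described above can be sketched in a few lines of Python: places hold data tokens, and a transition fires when all of its input places contain tokens, invoking a user-supplied calculation routine. This is a hypothetical toy for illustration only; GPI-Space's actual workflow language, scheduling, and runtime are far richer than this sequential sketch.

```python
from collections import deque

class PetriNet:
    """Toy Petri net: places hold queues of data tokens, transitions
    consume input tokens, run a user function, and emit output tokens."""
    def __init__(self):
        self.places = {}        # place name -> queue of data tokens
        self.transitions = []   # (input places, output places, function)

    def add_place(self, name, tokens=()):
        self.places[name] = deque(tokens)

    def add_transition(self, inputs, outputs, func):
        # func is the domain expert's calculation routine: it consumes
        # one token per input place and returns one token per output place.
        self.transitions.append((inputs, outputs, func))

    def step(self):
        """Fire the first enabled transition; return False if none can."""
        for inputs, outputs, func in self.transitions:
            if all(self.places[p] for p in inputs):
                args = [self.places[p].popleft() for p in inputs]
                results = func(*args)
                for place, token in zip(outputs, results):
                    self.places[place].append(token)
                return True
        return False

    def run(self):
        while self.step():
            pass

# Example workflow: square each raw input token.
net = PetriNet()
net.add_place("raw", [1, 2, 3])
net.add_place("squared")
net.add_transition(["raw"], ["squared"], lambda x: (x * x,))
net.run()
print(list(net.places["squared"]))   # [1, 4, 9]
```

A real workflow engine would fire independent transitions concurrently on distributed workers, which is where the dynamic load balancing and overlap of communication and computation come from; the net itself only declares what may run when.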
Find further information: GPI-Space: www.gpi-space.com
|Prerequisites||Basic knowledge of parallel computation and programming.|
|Teachers:||Reinhold Bader (LRZ), Ferdinand Jamitzky (LRZ), external Lecturers from Fraunhofer ITWM|
|Registration:||Available via LRZ registration form (Please choose course HPGA1W14)|