Update: Online Courses by LRZ and PRACE
We are happy to announce that LRZ and PRACE will offer several online training events. There are still some places available.
Please note that registration is required, since the access details for the online courses will be provided to registered attendees only.
PRACE Workshop: HPC code optimisation workshop
Date: Monday, June 8 - Wednesday, June 10, 2020, 09:00-17:00 CEST
Registration Deadline: May 25, 2020
Lecturers: Momme Allalen (LRZ), Fabio Baruffa (Intel), Gennady Fedorov (Intel), Gerald Mathias (LRZ), Carla Guillen (LRZ), Michael Steyer (Intel), Igor Vorobtsov (Intel)
We will begin with a description of the latest micro-processor architectures and how developers can efficiently use modern HPC hardware, in particular the vector units (via SIMD programming and AVX-512 optimization) and the memory hierarchy. The attendees are then guided through the optimization process in hands-on exercises and learn how to enable vectorization using simple pragmas as well as more effective techniques, such as changing the data layout and alignment. The work is guided by hints from the Intel® compiler reports and by Intel® Advisor. Besides Intel® Advisor, the participants will also be introduced to Intel® VTune™ Amplifier, Intel® Application Performance Snapshot and LIKWID as tools for investigating and improving the performance of an HPC application. We further cover the Intel® Math Kernel Library (MKL) to show how to gain performance through the use of libraries.
PRACE Workshop: Deep Learning and GPU programming workshop
Date: Monday, June 15 - Thursday, June 18, 2020, 09:00-17:00 CEST (tbc.)
Registration Deadline: June 1, 2020
Lecturers: Dr. Momme Allalen, Dr. Durillo Barrionuevo, Dr. Volker Weinberg (LRZ and NVIDIA University Ambassadors)
Learn how to train and deploy a neural network to solve real-world problems, how to generate effective descriptions of content within images and video clips, how to effectively parallelize the training of deep neural networks on multiple GPUs, and how to accelerate your applications with CUDA C/C++ and OpenACC. This 4-day workshop combines lectures on the fundamentals of Deep Learning for Multiple Data Types and Multi-GPUs with lectures on Accelerated Computing with CUDA C/C++ and OpenACC. The lectures are interleaved with many hands-on sessions using Jupyter Notebooks. The exercises will be done on a fully configured GPU-accelerated workstation in the cloud. The workshop is co-organized by LRZ and the NVIDIA Deep Learning Institute (DLI) for the Partnership for Advanced Computing in Europe (PRACE).
PRACE Course: Introduction to hybrid programming in HPC
Date: Wednesday, June 17, 2020, 08:45 - Friday, June 19, 2020, 16:00 CEST
Registration Deadline: June 2, 2020
Lecturers: Dr. habil. Georg Hager (RRZE, Uni. Erlangen), Dr. Rolf Rabenseifner (HLRS, Uni. Stuttgart), Dr. Claudia Blaas-Schenner, Dr. Irene Reichl (VSC Research Center, TU Wien)
Most HPC systems are clusters of shared-memory nodes. To use such systems efficiently, both memory consumption and communication time have to be optimized. Therefore, hybrid programming may combine distributed-memory parallelization across the node interconnect (e.g., with MPI) with shared-memory parallelization inside each node (e.g., with OpenMP or MPI-3.0 shared memory). This course analyses the strengths and weaknesses of several parallel programming models on clusters of SMP nodes. LRZ has joined forces with VSC Vienna and HLRS Stuttgart to offer this course online as a replacement for the course originally scheduled for April at LRZ.
Intel oneAPI for FPGAs
Date: Friday, June 19, 2020, 13:00-19:00 CEST
Registration Deadline: June 5, 2020
Lecturer: Susannah Martin (Intel)
In this course you will learn to use the Intel® oneAPI Base Toolkit and the Intel® FPGA Add-On for oneAPI Base Toolkit to target an FPGA. We will explore how your Data Parallel C++ (DPC++) source code becomes a custom compute unit, and what resources are utilized in the FPGA to build it. The proper development flow for working with an FPGA will be presented: emulation, interpreting optimization reports, and performance analysis on the FPGA. Finally, you will be introduced to important optimization concepts such as pipelining loop iterations and architecting kernel memory. The course includes lectures, demonstrations and hands-on sessions.
Optimizing OpenCL Programs for Intel FPGAs
Date: Thursday, June 25, 2020, 09:00-18:00 CEST
Registration Deadline: June 15, 2020
Lecturer: Marlon Price (Intel)
The course covers various optimization techniques for implementing high-performance OpenCL™ applications on FPGAs. We'll use the various debug and analysis tools available in the Intel® FPGA SDK for OpenCL™ software technology to boost the performance of OpenCL kernels. The first half of the lecture focuses on the optimization of single work-item kernels and the utilization of channel constructs and OpenCL kernel pipes. The second half focuses on the optimization of NDRange kernels and the effective utilization of FPGA memory resources. Throughout the lecture we will discuss good coding practices for FPGAs and tool features that improve OpenCL kernel performance on FPGAs. The course is offered by Intel in cooperation with LRZ using the Webex Training platform.
PRACE MOOCs (Massive Open Online Courses)
We also want to inform you about the following two MOOCs offered by PRACE:
PRACE MOOC: MPI: A Short Introduction to One-sided Communication
Date: Starting on 20th April 2020
Learn the details of one-sided communication in MPI programming, and discover its advantages for parallel programming. The Message Passing Interface (MPI) is a key standard for parallel computing architectures. On this course, you'll learn the essential concepts of one-sided communication in MPI, as well as the advantages of the MPI communication model.
You'll learn the details of how exactly MPI works, as well as how to use Remote Memory Access (RMA) routines. Examples, exercises, and tests will be used to help you learn and explore.
PRACE MOOC: Python in High Performance Computing
Date: Starting on 27th April 2020
The Python programming language is popular in scientific computing because of the benefits it offers for fast code development. The performance of pure Python programs is often suboptimal, but there are ways to make them faster and more efficient.
On this course, you’ll find out how to identify performance bottlenecks, perform numerical computations efficiently, and extend Python with compiled code. You’ll learn various ways to optimise and parallelise Python programs, particularly in the context of scientific and high performance computing.
Information on further HPC courses:
- by LRZ: http://www.lrz.de/services/compute/courses/
- by HLRS: https://www.hlrs.de/training/ including new ONLINE courses!
- by the Gauss Centre for Supercomputing (GCS): http://www.gauss-centre.eu/training
- by German Centres (collected by the Gauß-Allianz): https://hpc-calendar.gauss-allianz.de/
- by the Partnership for Advanced Computing in Europe (PRACE): http://www.training.prace-ri.eu/