

Quantum computing is on everyone’s lips – the technology is credited with enormous potential. And it is no longer confined to research and development: industry and business have discovered the field for themselves and want to harness the expected potential of quantum computers.
Quantum computers are slowly conquering computing centres: algorithms are being developed and refined, programmers are working on software environments, and researchers are asking which applications can benefit most from which quantum computers – including in Bavaria, where internationally leading experts in the field have come together to form Munich Quantum Valley (MQV).
With the LRZ Quantum Integration Centre (QIC), we will provide the MQV with long-term support and implement the interfaces for users of Bavarian quantum computers. As an internationally recognised supercomputing centre and IT service provider for science, we at the LRZ can draw on decades of experience to ensure reliable operation. Together with partners, we are building the necessary ecosystem and driving forward the training and further education of experts. The basis for these activities is the integration of quantum computing into supercomputing: connecting quantum systems with LRZ services and users, providing education and training, and, not least, coupling the systems with high-performance computers.
The LRZ operates several systems based on superconducting qubits from technology partner IQM. The systems at the LRZ currently have more than 20 qubits and will grow to between 54 and 150 in the future. This technology uses resistance-free currents in superconducting circuits to realise the basic building blocks of a quantum computer, the quantum bits. These currents are relatively robust against external interference and can retain their quantum properties over long periods of time.
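To make this concrete, here is a minimal sketch of the kind of small job such a superconducting system could run, written with the open-source Qiskit toolkit (one possible choice; the source does not prescribe a programming environment). A local simulator stands in for the real machine.

```python
# Minimal sketch: prepare a 3-qubit GHZ (fully entangled) state,
# the kind of small circuit a ~20-qubit machine handles easily.
# A local simulator stands in for real superconducting hardware.
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

qc = QuantumCircuit(3)
qc.h(0)          # put qubit 0 into superposition
qc.cx(0, 1)      # entangle qubit 1 with qubit 0
qc.cx(1, 2)      # extend the entanglement to qubit 2
qc.measure_all()

backend = AerSimulator()
result = backend.run(transpile(qc, backend), shots=1000).result()
print(result.get_counts())   # ideally only '000' and '111' appear
```

On real hardware, the backend object would come from the vendor’s access library rather than from the simulator; the circuit itself stays the same.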
The trapped-ion system from technology partner AQT consists of an ion trap, a laser and camera unit, and control electronics. It works with 20 qubits made from electrically charged atoms (ions) whose quantum states are controlled by laser beams; a laser can address any pair in the quantum register as required. Thanks to this full connectivity and low error rates during computing, results similar to or even better than those of quantum systems with a higher qubit count but less connectivity can be achieved. The AQT system also does not require an extensive cooling or power supply infrastructure.
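The value of full connectivity can be illustrated with a small transpilation experiment (again using Qiskit purely as an illustrative tool): a two-qubit gate between distant qubits needs no extra operations on an all-to-all device, while a linearly connected device forces the compiler to insert SWAPs.

```python
# Sketch: routing cost of one CX between the outermost of 5 qubits
# on a linear chain versus an all-to-all (fully connected) device.
from qiskit import QuantumCircuit, transpile
from qiskit.transpiler import CouplingMap

qc = QuantumCircuit(5)
qc.cx(0, 4)   # two-qubit gate between qubits that are far apart

line = transpile(qc, coupling_map=CouplingMap.from_line(5),
                 basis_gates=["cx", "u"], optimization_level=0)
full = transpile(qc, coupling_map=CouplingMap.from_full(5),
                 basis_gates=["cx", "u"], optimization_level=0)

print("linear chain:", line.count_ops())   # extra CXs from routed SWAPs
print("all-to-all:  ", full.count_ops())   # the single CX suffices
```

Every inserted SWAP costs three CX gates, so on noisy hardware fewer routed gates translate directly into fewer errors – which is why a well-connected 20-qubit trap can compete with larger but sparsely connected chips.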
For the “Multicore Atomic Quantum Computing System” (MAQCS) project, the Garching-based start-up planqc is currently developing a quantum system based on neutral atoms that can perform calculations with 1,000 qubits. MAQCS will run until 2027; the quantum computer developed as part of the project will be available to selected researchers and integrated into the LRZ’s high-performance computing (HPC) systems.
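What such an integration could look like from a user’s perspective is sketched below: a classical loop that offloads one kernel per step to a quantum resource. Everything here is hypothetical – submit_to_qpu() is a stand-in name for illustration, not an actual LRZ interface.

```python
# Hypothetical hybrid HPC-QC loop. submit_to_qpu() is a placeholder
# for whatever interface the integrated system will expose; here it
# returns a mock expectation value so the sketch runs on its own.
import numpy as np

def submit_to_qpu(params):
    """Stand-in for the quantum kernel: mock energy sum(cos(p))."""
    return float(np.cos(params).sum())

params = np.full(4, 1.0)
for step in range(50):
    energy = submit_to_qpu(params)     # "quantum" evaluation
    params += 0.1 * np.sin(params)     # classical gradient step on cos
print("final mock energy:", submit_to_qpu(params))
```

The pattern mirrors variational hybrid algorithms, where a classical optimiser running on the HPC side repeatedly queries the quantum device.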
We research the latest computer and storage technologies as well as Internet tools. In collaboration with partners, we develop technologies for future computing, energy-efficient computing and IT security, as well as tools for data analysis and the development of artificial intelligence systems. Here is an overview of our quantum R&D projects.
Future exascale high-performance computers require a new kind of software that allows dynamic workloads to run with maximum energy efficiency on the best-suited hardware available in the system. The Technical University of Munich (TUM) and the Leibniz Supercomputing Centre (LRZ) are working together to create a production-ready software stack to enable low-energy, high-performance exascale systems.
Exascale supercomputers are knocking at the door. They might be a game-changer for the way we design and use high-performance computing (HPC) systems. As exascale performance drives more HPC systems toward heterogeneous architectures that mix traditional CPUs with accelerators like GPUs and FPGAs, computational scientists will have to design more dynamic applications and workloads in order to get massive performance increases for their applications.
The question is: How will applications leverage these different technologies efficiently and effectively? Power management and dynamic resource allocation will become the most important aspects of this new era of HPC. Stated more simply: How do HPC centres ensure that users are getting the most science per joule?
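“Science per joule” boils down to comparing runs by energy-to-solution rather than runtime alone. A toy calculation (with invented numbers) shows why a faster but more power-hungry configuration can still win:

```python
# Toy "science per joule" comparison; runtimes and power draws
# are invented for illustration only.
runs = {
    # configuration: (runtime in seconds, average power in watts)
    "CPU-only": (3600.0, 2500.0),
    "CPU+GPU":  (1200.0, 4000.0),
}

for name, (seconds, watts) in runs.items():
    joules = seconds * watts                    # energy to solution
    print(f"{name}: {joules / 3.6e6:.2f} kWh, "
          f"{1e6 / joules:.3f} solutions per MJ")
```

Here the accelerated run draws 60 % more power but finishes three times faster, so it delivers almost twice the science per joule.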
Optimizing application performance on heterogeneous systems under power and energy constraints poses several challenges. Some are quite sophisticated, like the dynamic phase behaviour of applications. Others are basic hardware issues, like the variability of processors: due to manufacturing limitations, low-power operation of CPUs can cause a wide variety of frequencies across the cores. Adding to these is the ever-growing complexity and heterogeneity at the node level.
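This frequency variability can be observed directly on a Linux node via the kernel’s cpufreq interface; a minimal sketch (paths and availability differ between systems):

```python
# Sketch: per-core frequency spread on a Linux node, e.g. while a
# power cap pushes cores to different operating points. Reads the
# cpufreq sysfs files, which report frequencies in kHz.
import glob

freqs = []
for path in sorted(glob.glob(
        "/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_cur_freq")):
    with open(path) as f:
        freqs.append(int(f.read()) / 1e6)   # kHz -> GHz

if freqs:
    print(f"{len(freqs)} cores: min {min(freqs):.2f} GHz, "
          f"max {max(freqs):.2f} GHz, "
          f"spread {max(freqs) - min(freqs):.2f} GHz")
```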
A software stack for such heterogeneous exascale systems will have to meet some specific demands. It has to be dynamic, work with highly heterogeneous integrated systems, and adapt to existing hardware. TUM and LRZ are working closely together to build a software stack based on existing and proven solutions: MPI and its various implementations, SLURM, PMIx, and DCDB, among others, are well-known parts of this Munich Software Stack.
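As a flavour of the applications this stack serves, here is a minimal MPI example using the mpi4py bindings – each rank reports where it runs, which is exactly the placement a dynamic, power-aware scheduler would steer across heterogeneous nodes:

```python
# Minimal MPI program (mpi4py bindings): each rank reports its
# placement. Under SLURM, launch with e.g.: srun -n 4 python hello.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()
node = MPI.Get_processor_name()
print(f"rank {rank} of {size} running on node {node}")
```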
“The basic stack is already running on the SuperMUC-NG supercomputer at the LRZ,” says Martin Schulz, Chair for Computer Architecture and Parallel Systems at the Technical University of Munich and Director at the Leibniz Supercomputing Centre. “Right now, we are engaged in two European research projects for further development of this stack on more heterogeneous, more deeply integrated and dynamic systems, as they will become commonplace in the exascale era: REGALE and DEEP-SEA.” One of the foundations for the next generation of this software stack is the HPC PowerStack[1], an initiative co-founded by TUM for better standardization and homogenization of approaches to power- and energy-optimized systems.
REGALE aims to define an open architecture, build a prototype system, and equip supercomputing systems with the mechanisms and policies needed for effective resource utilization and the execution of complex applications. DEEP-SEA will deliver the programming environment for future European exascale systems, capable of adapting at all levels of the software stack. While the basic technologies will be implemented and used in DEEP-SEA, the control chain will play a major role in REGALE.
Both projects are focused on making existing codes more dynamic so they can leverage existing accelerators: many codes today are static and might only be partially ready for more dynamic execution. This will require some refactoring and, in some cases, complete rewrites of certain parts of the codes. But it will also require novel and elaborate scheduling methods that must be developed by HPC centres themselves. Part of the upcoming research in DEEP-SEA and REGALE will be to find ways to determine where targeted efforts on top of an existing software stack can yield the greatest result. To this end, agile development approaches will play a role: Continuous Integration with elaborate testing and automation is being established on BEAST (Bavarian Energy-, Architecture- and Software-Testbed) at the LRZ, the testbed for the Munich Software Stack.
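As an illustration, a deliberately simple correctness test of the kind such a CI pipeline would run on every architecture of the testbed (pytest is assumed as the test framework here; the actual BEAST setup is not specified in the source):

```python
# Toy CI test: verify a small compute kernel against a reference
# result, the kind of check run per architecture on a testbed.
# Assumes pytest as the test runner (an assumption, not confirmed).
import numpy as np

def saxpy(a, x, y):
    """a*x + y kernel over whole arrays."""
    return a * x + y

def test_saxpy_matches_reference():
    rng = np.random.default_rng(seed=0)
    x, y = rng.random(1000), rng.random(1000)
    np.testing.assert_allclose(saxpy(2.0, x, y), 2.0 * x + y)
```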
“Most research in the field of power and energy management today is site-specific,” Schulz said. “We see little integration of the components; we lack standardized interfaces that work on all layers of the software stack. In the end, this leads to suboptimal performance of the applications and increases the power needed by the system. With the Munich Software Stack, TUM and LRZ are working on an open, holistic, and scalable approach to integrated power and energy management in order to get the most out of the supercomputers to come.”
The LRZ supports researchers taking their first steps in quantum computing. Get in touch with our QIC team if you would like to find out more about the available hardware resources, or if you have questions about optimising algorithms. Are you interested in working with us to advance quantum computing? We look forward to hearing from you.