Leibniz Supercomputing Centre Accelerates AI Innovation in Bavaria with Next-Generation AI System from Cerebras Systems and Hewlett Packard Enterprise
The Wafer-Scale Engine 2 from Cerebras Systems - currently the largest chip in the world (Photo: Cerebras Systems)
Leibniz Supercomputing Centre’s new advanced AI system will enable researchers to accelerate initiatives around machine learning, deep learning and neural networks and to process large amounts of data more quickly for advanced scientific research using the combined power of the Cerebras CS-2 system and the HPE Superdome Flex server
Garching, Germany – May 25, 2022 – The Leibniz Supercomputing Centre (LRZ), Cerebras Systems, and Hewlett Packard Enterprise (HPE) today announced the joint development and delivery of a new system featuring next-generation AI technologies to significantly accelerate scientific research and innovation in AI for Bavaria. The new system is funded by the Free State of Bavaria through the Hightech Agenda, a program dedicated to strengthening the tech ecosystem in Bavaria and to fueling the region’s mission of becoming an international AI hotspot. The new system is also an additional resource for Germany’s national supercomputing center, and part of LRZ’s Future Computing Program, which represents a portfolio of heterogeneous computing architectures across CPUs, GPUs, FPGAs and ASICs.
Empowering Bavaria’s scientific community to speed discovery and make breakthroughs
The new system is expected for delivery this summer and will be hosted at LRZ, an institute of the Bavarian Academy of Sciences and Humanities (BAdW). The system will be used by local scientific and engineering communities to support a variety of research use cases. Identified applications include natural language processing (NLP); medical image processing, involving innovative algorithms to analyze medical images and computer-aided capabilities to accelerate diagnosis and prognosis; and computational fluid dynamics (CFD) to advance understanding in areas such as aerospace engineering and manufacturing.
Delivering next-generation AI with scalable and accelerated compute features
The new system is purpose-built to process large datasets and tackle complex scientific research. It comprises the HPE Superdome Flex server and the Cerebras CS-2 system, making it the first solution in Europe to leverage the Cerebras CS-2 system. The HPE Superdome Flex server delivers a modular, scale-out solution to meet computing demands and features specialized capabilities targeting the large, in-memory processing required to handle vast volumes of data. Additionally, the HPE Superdome Flex server’s pre- and post-data processing capabilities for AI model training and inference are ideal for supporting the Cerebras CS-2 system, which delivers the deep learning performance of hundreds of graphics processing units (GPUs) with the programming ease of a single node. Powered by the largest processor ever built – the Cerebras Wafer-Scale Engine 2 (WSE-2), which is 56 times larger than its nearest competitor – the CS-2 delivers more AI-optimized compute cores, faster memory, and more fabric bandwidth than any other deep learning processor in existence.
"Currently, we observe that AI compute demand among our users is doubling every three to four months. With its high integration of processors, memory and on-board networks on a single chip, Cerebras enables high performance and speed. This promises significantly more efficient data processing and thus faster scientific breakthroughs," says Prof. Dr. Dieter Kranzlmüller, Director of the LRZ. "As an academic computing and national supercomputing centre, we provide researchers with advanced and reliable IT services for their science. To ensure optimal use of the system, we will work closely with our users and our partners Cerebras and HPE to identify ideal use cases in the community and to help achieve groundbreaking results."
Cerebras CS-2 delivers the largest AI chip with 850,000 computing cores
AI methods and machine learning demand computing power. Currently, the complexity of the neural networks used to analyze large volumes of data is doubling in a matter of months. Until now, however, such applications have primarily been run on general-purpose processors and graphics processors (CPUs and GPUs).
"We founded Cerebras to revolutionize compute," said Andrew Feldman, CEO and Co-Founder of Cerebras Systems. "We’re proud to partner with LRZ and HPE to give Bavaria’s researchers access to blazing fast AI, enabling them to try new hypotheses, train large language models and ultimately advance scientific discovery."
The Cerebras WSE-2 is 46,225 square millimeters of silicon, housing 2.6 trillion transistors and 850,000 AI-optimized computational cores, as well as evenly distributed memory that holds up to 40 gigabytes of data and fast interconnects that transport it across the chip at 220 petabytes per second. This allows the WSE-2 to keep all the parameters of multi-layered neural networks on one chip during execution, which in turn reduces computation time and data movement. The CS-2 system is already in use at a number of U.S. research facilities and enterprises, where it is proving particularly effective in image and pattern recognition and natural language processing (NLP). Additional efficiency comes from water cooling, which reduces power consumption.
Offering a powerful system and software for AI development
To support the Cerebras CS-2 system, the HPE Superdome Flex server provides large-memory capabilities and unprecedented compute scalability to process the massive, data-intensive machine learning projects that the Cerebras CS-2 system targets. The HPE Superdome Flex server also manages and schedules jobs according to AI application needs, enables cloud access, and stages larger research datasets. In addition, the HPE Superdome Flex server includes a software stack with programs for building AI procedures and models.
“We are excited to extend our collaboration with Leibniz Supercomputing Centre (LRZ) by supplying next generation computing technology to its scientific community,” said Justin Hotard, executive vice president and general manager, HPC & AI, at HPE. “Through our work with LRZ and Cerebras, we are pleased to support the next wave of scientific and engineering innovation in Germany. As AI and machine learning become more prevalent and we move into the age of insight, highly optimized systems such as LRZ’s new system will accelerate scientific breakthroughs for the good of humanity.”
In addition to AI workloads, the combined technologies from HPE and Cerebras will also be considered for more traditional HPC workloads in support of larger, memory-intensive modeling and simulation needs.
"The future of computing is becoming more complex, with systems becoming more heterogeneous and tuned to specific applications. We should stop thinking in terms of HPC or AI systems," says Laura Schulz, Head of Strategy at LRZ. "AI methods work on CPU-based systems like SuperMUC-NG, and conversely, high-performance computing algorithms can achieve performance gains on systems like Cerebras. We’re working towards a future where the underlying compute is complex but doesn’t impact the user; where the technology, whether HPC, AI or quantum, is available and approachable for our researchers in pursuit of their scientific discovery."