Images from the Organism


A view from above into the human forearm: the visualisation can be used in many different ways and shows a wide variety of perspectives in immersive virtual or mixed reality applications. Figure: LRZ/CompBioMed

This is a crux in medicine: doctors cannot simply look at whether and how organs function, or whether blood flows freely through the veins. Their knowledge is usually based on experience and on static snapshots provided by computed tomography scanners, X-ray machines and other diagnostic equipment. This is set to change: supercomputers are to simulate and visualise a living human being, the Virtual Human, on the basis of measured values, data and images. "The concept can be described as a digital representation of all biophysical processes of a human being and is created on the basis of conventional imaging methods," CompBioMed says.

For the international project, around 20 research institutes and supercomputing centres have been working on the Virtual Human since 2016, and are also developing techniques and software for the digitisation of medicine and pharmacology. Considerable milestones have now been achieved in both fields with the help of the Leibniz Supercomputing Centre (LRZ) in Garching: in close collaboration, a research group led by the British physicist and computer scientist Prof. Dr. Peter Coveney succeeded in using SuperMUC-NG to simulate the blood flow in the veins and arteries of a human forearm. "The challenge was to model the data required for this and then visualise it," says Elisabeth Mayer, specialist at the Centre for Virtual Reality and Visualisation (V2C) at the LRZ. "This creates large amounts of data that only a supercomputer can process."

Observing and understanding blood flow

The basis for the inside view of a forearm is an open Lattice-Boltzmann code called HemeLB, which was developed within CompBioMed for three-dimensional modelling of blood flow and has now been optimised and adapted at the LRZ, achieving 80 per cent scaling efficiency on SuperMUC-NG. HemeLB simulated more than 230 million data points and produced 64 time snapshots to represent the pressure with which blood is pumped through the veins of the forearm during a heartbeat. Each time step produced roughly 7 gigabytes of data, making a total of around 470 gigabytes, or 0.47 terabytes. "HemeLB generates complex, large-volume and sparse data, because the blood vessels only occupy a small fraction of the space inside the body," Mayer explains. You can think of it as a three-dimensional game of Battleships: HemeLB models the location of veins and arteries by dividing the space into many cubes, but the vessels are thin tubes that pass through only very few of all the cubes that make up the forearm. This makes evaluation and visualisation more difficult.
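The numbers above can be sanity-checked with a short sketch, and the Battleships analogy illustrated with a boolean occupancy grid. Everything here is illustrative: the grid size, the tube geometry and the per-step value of 7.34 GB (chosen so the total matches the roughly 470 GB quoted) are assumptions, not HemeLB internals.

```python
import numpy as np

# Figures quoted in the text: 64 time snapshots at roughly 7 GB each.
snapshots = 64
gb_per_step = 7.34                    # illustrative value, chosen to match ~470 GB
total_gb = snapshots * gb_per_step
print(f"total: {total_gb:.0f} GB")    # prints "total: 470 GB"

# The sparsity, Battleships-style: the domain is divided into cubes,
# but the vessels pass through only a tiny fraction of them.
domain = np.zeros((100, 100, 100), dtype=bool)  # illustrative million-cube grid
domain[50, 50, :] = True                        # one thin "vessel" tube
occupancy = domain.sum() / domain.size
print(f"occupied fraction: {occupancy:.4%}")    # prints "occupied fraction: 0.0100%"
```

A dense array would store every empty cube; a sparse layout, as the text describes, stores only the occupied ones, which is why the data are hard to handle with off-the-shelf tools.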


A two-dimensional view of the forearm: in white and grey, the veins; in red, the arteries.
Figure: LRZ/CompBioMed

The values calculated in this way were processed with the open-source graphics software OSPRay from the Intel oneAPI Rendering Toolkit. Moving data from one software package to the next rarely works smoothly even on a notebook, and in supercomputing it is always a challenge: "With the ideas and support of LRZ specialists and the biomedical scientists from CompBioMed, Intel has developed a plug-in so that HemeLB data can be loaded directly into OSPRay Studio," reports Salvatore Cielo from the LRZ's Computational X Support team. Together, the group also worked out further tools and workflows that significantly simplify and accelerate the interaction of HemeLB with graphics software. "Through our workflow, there is now an efficient process for visualisation, without the need to store large amounts of intermediate data," Mayer tells us. With this toolkit, blood flow can now also be visualised in other parts of the body - CompBioMed and the visualisation specialists at the LRZ are already working on the so-called Circle of Willis, or Circulus Arteriosus Cerebri, a vascular ring that supplies the entire brain with blood. Instead of the current 7 gigabytes, more than 1,500 gigabytes will have to be processed per time step - SuperMUC-NG will be busy for a long time.
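The idea behind the workflow - each simulation step flows straight into the renderer instead of being written to disk first - can be sketched in a few lines. This is a hypothetical illustration only: none of the function names below come from HemeLB or OSPRay Studio, and the real coupling is done by the Intel-developed plug-in, not in Python.

```python
from typing import Iterator
import numpy as np

def simulate_steps(n_steps: int, n_points: int) -> Iterator[np.ndarray]:
    """Stand-in for the solver: yields one pressure field per time step."""
    rng = np.random.default_rng(42)
    for _ in range(n_steps):
        yield rng.normal(loc=100.0, scale=5.0, size=n_points)

def render(field: np.ndarray) -> str:
    """Stand-in for the renderer: summarises a field instead of drawing it."""
    return f"frame: pressure {field.min():.1f}..{field.max():.1f}"

# Each step is rendered as soon as it is produced and can then be
# discarded - no intermediate files are ever written.
for frame in map(render, simulate_steps(n_steps=4, n_points=10_000)):
    print(frame)
```

The generator hands one time step at a time to the consumer, which is the key point: at roughly 7 gigabytes per step, holding all 64 steps on disk or in memory at once would be far more expensive than streaming them.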

The visualisation tools and the impressive images of the blood flow are already attracting the attention of the HPC community: the work was submitted to the "SC Scientific Visualization & Data Analytics Showcase", a competition at the HPC conference Supercomputing 2021, immediately landed among the six best visualisations in the world, and has a good chance of winning the competition this year. "One advantage of our workflow is that any images and media can be created from it and exported," says Mayer. "Anything is possible - a graphic, a short video clip, even a cinema film or a three-dimensional virtual reality application." Just imagine: doctors could dive into a person's body and better understand bodily functions. Once the simulations and visualisations are also filled with individual patient data, treatment methods and operations could be better planned in advance.

Accelerating the search for active agents

Large amounts of data are also handled by supercomputers in the search for new active substances for drugs. In various research projects on COVID-19, CompBioMed algorithms clarified how the virus reproduces - and where it is particularly vulnerable. With this preliminary work, the researchers were able to detect substances that bind to the virus more quickly: a groundbreaking result that can shorten the usual development times for marketable drugs and vaccines. To speed up the search for substances, CompBioMed researchers coupled machine learning with established molecular dynamics simulation techniques in a multi-step process. In this way, SuperMUC-NG was able to use thousands of ESMACS and TIES calculations to predict how strongly organic and inorganic substances interact with four spike proteins of the coronavirus. To do this, the HPC and CXS teams optimised existing software and developed management tools to ensure that the calculations utilise as many of SuperMUC-NG's 311,040 compute cores as possible. With the computational results, artificial neural networks were in turn trained for the screening of active substances. And with each additional combined analysis and computation step, the results became more precise and were available more quickly.
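The combined loop described above - expensive simulations score a few compounds, a cheap learned model is fitted to those scores and proposes which compounds to simulate next - can be sketched as a toy example. Everything here is an assumption for illustration: the project's real pipeline uses ESMACS/TIES free-energy protocols and deep neural networks, not random descriptors and a least-squares surrogate.

```python
import numpy as np

rng = np.random.default_rng(7)
n_compounds, n_features = 5_000, 8
X = rng.normal(size=(n_compounds, n_features))   # toy compound descriptors
true_w = rng.normal(size=n_features)             # hidden "physics" of binding

def expensive_simulation(idx: np.ndarray) -> np.ndarray:
    """Stand-in for an ESMACS/TIES binding-affinity calculation."""
    return X[idx] @ true_w + rng.normal(scale=0.1, size=len(idx))

# start by simulating a small random batch of compounds
scored_idx = rng.choice(n_compounds, size=50, replace=False)
scores = expensive_simulation(scored_idx)

for round_ in range(3):
    # fit a cheap surrogate model to the compounds simulated so far
    w, *_ = np.linalg.lstsq(X[scored_idx], scores, rcond=None)
    # let the surrogate rank ALL compounds, then simulate only the
    # most promising ones (lowest score = strongest predicted binding)
    candidates = np.argsort(X @ w)[:20]
    new = np.setdiff1d(candidates, scored_idx)
    scored_idx = np.concatenate([scored_idx, new])
    scores = np.concatenate([scores, expensive_simulation(new)])
    print(f"round {round_}: {len(scored_idx)} compounds simulated")
```

The design point this illustrates is the one the article makes: the surrogate is orders of magnitude cheaper than the simulation, so each round narrows the billions of candidates down to a shortlist that is worth the full calculation.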

In a very short time, SuperMUC-NG in Garching and its counterpart Summit at the Oak Ridge National Laboratory in the USA screened billions of compounds that can interact with the target proteins of SARS-CoV-2. In parallel, workflows were built for deploying the HPC software for molecular dynamics - including GROMACS, NAMD, AMBER and OpenMM - and the tools for training smart neural networks. RADICAL-Cybertools were also used to create middleware that connects software, databases and smart systems. "This hybrid approach of machine learning and simulation," summarises a recent paper, "has the potential to deliver new pandemic drugs at pandemic speed." Another step that moves pharmaceutical research and medicine towards the future. (vs)