HLRS researchers do the math for future energy savings


08 June 2017

Computational fluid dynamics (CFD) simulations are both energy and time intensive. However, for future industrial applications, CFD simulations are crucial for reducing costs and time-to-market and for increasing product innovation. Within the scope of the EU research project ‘ExaFLOW’, HLRS researchers Björn Dick and Dr. Jing Zhang are rising to the challenge of developing energy-efficient algorithms and easing the I/O bottleneck.

Modern high-performance computing (HPC) systems have seen steady growth in computation speed over the last 10 years and are now heading towards exascale performance, a thousand-fold speedup over current petascale machines. However, data transfer rates have not kept up with this rapid hardware development. Despite a system's theoretical capability to produce and process large amounts of data quickly, overall performance is often restricted by how fast the computed data can be transferred and stored, not to mention the high energy demand of handling big datasets.

Nevertheless, high-performance CFD simulations are gaining importance for industry, as they lower costs during research and development and expand optimization possibilities. For a business, this means better products reaching the market faster. To this end, HLRS, together with seven other European partners, started the ‘ExaFLOW’ project with the aim of addressing key algorithmic challenges in CFD to enable simulation at exascale, guided by a number of use cases of industrial relevance, and to provide open-source pilot implementations. The role of HLRS in this project is to work out solutions for two crucial exascale bottlenecks: data volume and energy consumption.

HLRS contributes with energy efficient algorithms and I/O optimization

Principal investigator Jing Zhang is convinced of the importance of the ‘ExaFLOW’ project. “CFD simulations are among the most important application areas in HPC and crucial for future progress,” she said. “But in order to reach our development goals, the data volume is the main influencing variable we need to play with.” Within the ExaFLOW project, her main task is to apply singular value decomposition (SVD) to the simulation datasets and study its effect on data volume. SVD decomposes a single matrix into several matrices with distinct characteristics, which can then be used to reduce the dimensionality of the data. However, this data-reduction approach entails a slight loss of accuracy, so the challenge is to find the right balance between data reduction and accuracy loss in the I/O process.

HLRS researcher Björn Dick, meanwhile, is responsible for studying the energy efficiency of applications running in the ‘ExaFLOW’ project. Studies show that moderately reducing the CPU clock frequency of compute nodes, the rate at which their processors execute instructions, can yield significant savings in energy to solution without excessively increasing time to solution. If the clock frequency is reduced too far, however, time to solution grows substantially. The researchers are therefore not simply minimizing energy consumption, but searching for the right balance between energy to solution and time to solution.
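To give a rough sense of the SVD-based reduction idea, the sketch below applies a truncated SVD to a synthetic "snapshot matrix" whose columns stand in for flow-field snapshots. All matrix sizes, ranks, and variable names are hypothetical illustrations, not taken from the ExaFLOW codebase:

```python
import numpy as np

# Hypothetical snapshot matrix: columns are flow-field snapshots at
# successive time steps (sizes and structure invented for illustration).
rng = np.random.default_rng(0)
n_points, n_snapshots = 1000, 50
# Low-rank structure plus small noise, mimicking coherent flow features.
snapshots = rng.standard_normal((n_points, 5)) @ rng.standard_normal((5, n_snapshots))
snapshots += 0.01 * rng.standard_normal((n_points, n_snapshots))

# Full SVD: snapshots = U @ diag(s) @ Vt
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)

# Truncated SVD: keep only the k largest singular values/vectors.
k = 5
approx = U[:, :k] * s[:k] @ Vt[:k, :]

# Storage drops from n_points*n_snapshots values
# to k*(n_points + n_snapshots + 1) values.
full_size = n_points * n_snapshots
reduced_size = k * (n_points + n_snapshots + 1)

# Relative reconstruction error quantifies the accuracy lost to truncation.
rel_err = np.linalg.norm(snapshots - approx) / np.linalg.norm(snapshots)
print(f"compression: {reduced_size / full_size:.1%} of original size")
print(f"relative error: {rel_err:.2e}")
```

Here the stored data shrinks to roughly a tenth of its original size while the reconstruction error stays small, illustrating the accuracy-versus-volume balance described above.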
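The frequency trade-off can likewise be sketched with a toy model (all power and runtime numbers below are invented for illustration, not measurements from HLRS systems): dynamic CPU power grows roughly with the cube of the clock frequency, while the runtime of a partly memory-bound code shrinks less than linearly with frequency, so moderate downclocking saves energy cheaply while aggressive downclocking mostly costs time:

```python
# Toy model of energy-to-solution vs. time-to-solution under frequency
# scaling. All constants are hypothetical.
def time_to_solution(f_ghz, compute_s=100.0, memory_s=40.0, f_ref=2.5):
    # The compute-bound part scales with frequency; the memory-bound
    # part does not benefit from a faster clock.
    return compute_s * f_ref / f_ghz + memory_s

def power_watts(f_ghz, p_static=50.0, c=8.0):
    # Static power plus dynamic power ~ f^3 (simplified CMOS model).
    return p_static + c * f_ghz ** 3

def energy_to_solution(f_ghz):
    return power_watts(f_ghz) * time_to_solution(f_ghz)

for f in (1.2, 1.8, 2.5):
    print(f"{f:.1f} GHz: {time_to_solution(f):6.1f} s, "
          f"{energy_to_solution(f) / 1000:6.1f} kJ")
```

In this toy setting, stepping from 2.5 GHz down to 1.8 GHz cuts energy to solution noticeably for a modest runtime increase, while dropping further to 1.2 GHz yields little additional energy saving at a much larger runtime cost, mirroring the balance the article describes.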

Researchers and entrepreneurs interested in exascale CFD are cordially invited to visit the workshop “Interdisciplinary Challenges Towards Exascale Fluid Dynamics,” organized by the ExaFLOW project team. The workshop will take place within the scope of the ISC High Performance conference on June 22 at the Marriott Hotel in Frankfurt, Germany. Further information regarding the workshop and registration is available online.

— Lena Bühler