One important difference between supercomputers and conventional desktop computers is their ability to perform complex calculations in parallel. Typically, algorithms with many steps are programmed in such a way that they are broken into smaller components that can be run simultaneously on separate compute nodes and then put back together to deliver final results. Doing this efficiently, however, requires that hardware be properly configured and that software be written in ways that best exploit this distributed approach.
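The split–compute–combine pattern described above can be sketched in a few lines of Python. In this toy example, thread workers stand in for separate compute nodes (a real HPC code would distribute chunks across nodes with a framework such as MPI); the function names are illustrative, not from any particular library:

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum_of_squares(chunk):
    # Each worker computes its piece of the problem independently.
    return sum(x * x for x in chunk)

def split(data, n_parts):
    # Break the input into n_parts roughly equal chunks.
    size = (len(data) + n_parts - 1) // n_parts
    return [data[i:i + size] for i in range(0, len(data), size)]

def parallel_sum_of_squares(data, n_workers=4):
    # Map the chunks onto workers in parallel, then combine
    # the partial results into the final answer.
    chunks = split(data, n_workers)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = pool.map(partial_sum_of_squares, chunks)
    return sum(partials)

print(parallel_sum_of_squares(list(range(1000))))  # → 332833500
```

The efficiency concerns mentioned above show up even in this sketch: if one chunk takes much longer than the others, workers sit idle waiting for it, which is one reason careful hardware configuration and software design matter.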
One additional challenge is that the different types of operations running within a calculation can require varying amounts of computing resources. To make high-performance computing (HPC) systems run most efficiently, these operations need to be managed so that the machine's resources stay consistently busy for the duration of the calculation (for example, by keeping a processor's vector units fully occupied). Within the HPC field this is known as sustained simulation performance.
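One common way to quantify sustained performance is as the ratio of the throughput an application actually achieves to the machine's theoretical peak. The following sketch uses purely hypothetical numbers chosen for illustration; they do not describe any real system:

```python
# Hypothetical figures for illustration only.
peak_flops = 1.0e15        # theoretical machine peak: 1 PFLOP/s
achieved_flops = 6.2e13    # measured throughput of some application

# Sustained efficiency: achieved throughput as a fraction of peak.
efficiency = achieved_flops / peak_flops
print(f"Sustained efficiency: {efficiency:.1%}")  # → Sustained efficiency: 6.2%
```

A gap of this size between peak and sustained performance is exactly what the optimization work discussed at the workshop aims to narrow.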
On October 10–11, 2017, HPC system users and builders met at the High-Performance Computing Center Stuttgart (HLRS) to discuss current challenges in sustained simulation performance and some recent innovations that are making this goal more achievable. Organized by HLRS in cooperation with Tohoku University and technology company NEC, the 26th Workshop for Sustained Simulation Performance once again brought together European, Russian, US, and Japanese investigators, as well as representatives from industry, to facilitate the exchange of new ideas among the different communities. The workshop has met twice each year since 2004, in spring in Japan and in autumn in Germany.
From hardware, to software, to scientific applications
The conference began with talks on next-generation supercomputers currently under development in Germany and Japan, with a particular focus on hardware improvements meant to address challenges in sustained simulation performance. A representative of NEC also introduced a new processor the company has designed for high-performance computing.
The remainder of the workshop featured presentations describing new approaches for programming HPC systems. Speakers discussed strategies for optimizing complex algorithms for parallel computing structures, reducing memory requirements, accelerating input/output rates for application data, and improving sustained simulation performance on cloud computing platforms, in agent-based systems, and in visualization environments, among other topics.
Many of the presentations also discussed applications of these computer science methods in the context of research fields such as computational fluid dynamics, structural mechanics, aerodynamics, multiphysics, physics of the human body, automatic program generation for simulation, and the social sciences.
As HPC systems continue to grow in power, researchers are able to gather increasingly large amounts of data and use new analytical methods that offer better insight and greater precision. These advancements offer opportunities for research in a variety of areas, but also present new challenges, such as ensuring that applications reach their full potential by being both scalable and portable to different computing systems. In this context, the Sustained Simulation Workshop is helping to lay the theoretical and practical foundations for the future of supercomputing.
Workshop program and proceedings
Proceedings from the 26th Workshop for Sustained Simulation Performance will be published by Springer in 2018. In the meantime, the workshop program and abstracts can be found here:
A book gathering the proceedings from last year's workshop is available here:
— Christopher Williams