HLRS, alongside its partners in the Gauss Centre for Supercomputing (GCS), is pleased to be participating in ISC High Performance 2021 Digital, Europe's largest convention for the high-performance computing (HPC) community. Due to the ongoing COVID-19 pandemic, the conference will take place virtually. Members of HLRS will both participate in the official conference program and hold presentations alongside it as WebEx conferences highlighting many of the center's current activities. (See below for a full program of HLRS's presentations.)
GCS Virtual Conference Booth
Additional information about HLRS — and about its GCS partners, the Leibniz Supercomputing Centre and the Jülich Supercomputing Centre — is available at a virtual conference booth created by the Gauss Centre for Supercomputing. In addition to the conference program, you can find links to information about other recent developments at the centers: Visit the Gauss Centre for Supercomputing Virtual Conference Website.
Please join HLRS for the following events. Access to the WebEx presentations is available at the links below and the broadcasts will begin shortly before the designated times. You may also register for HLRS's presentations in advance to receive calendar invitations for presentations that are of interest to you. Click here to register for notifications.
Please note that all times are Central European Time.
Monday, June 28
Parallel I/O is an essential part of scientific applications running on high-performance computing systems. Typically, parallel I/O stacks offer many parameters that need to be tuned to achieve the best possible I/O performance. Unfortunately, there is no single best default configuration; in practice the optimal parameters differ not only between systems, but often also from one application use case to another. However, scientific users often have neither the time nor the experience to explore the parameter space sensibly and choose the right configuration for each application use case. This study proposes an auto-tuning approach based on predictive modeling that can find a good set of I/O parameter values for a given system and application use case. It demonstrates the feasibility of auto-tuning parameters related to the Lustre file system and the MPI-IO ROMIO library transparently to the user. In particular, the model predicts the best configuration for a given I/O pattern from a history of I/O usage. The model has been validated with two I/O benchmarks, IOR and MPI-Tile-IO, and a real molecular dynamics code, ls1 Mardyn. (Visit the ISC website at the link above for a complete abstract.)
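To make the idea concrete, the following is a minimal sketch of history-based prediction, not the authors' actual model: given a record of past runs (I/O pattern, tunable configuration, measured bandwidth), pick the configuration that performed best for the most similar known pattern. All feature names and parameter names here are illustrative assumptions.

```python
# Illustrative sketch only (not the SEQUOIA/ISC authors' code): predict a
# good set of Lustre/ROMIO tunables for a new I/O pattern from a history
# of past runs, using a simple nearest-neighbour lookup.
from dataclasses import dataclass

@dataclass
class Run:
    pattern: tuple    # hypothetical features, e.g. (ranks, transfer MB, file GB)
    config: dict      # hypothetical tunables, e.g. stripe_count, cb_nodes
    bandwidth: float  # measured MB/s for this (pattern, config) pair

def predict_config(history, pattern):
    """Find the most similar I/O pattern seen before, then return the
    configuration that achieved the best bandwidth for that pattern."""
    def distance(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    nearest = min(history, key=lambda r: distance(r.pattern, pattern))
    candidates = [r for r in history if r.pattern == nearest.pattern]
    return max(candidates, key=lambda r: r.bandwidth).config

history = [
    Run((128, 4, 100), {"stripe_count": 4,  "cb_nodes": 2},  900.0),
    Run((128, 4, 100), {"stripe_count": 16, "cb_nodes": 8}, 2400.0),
    Run((16, 1, 10),   {"stripe_count": 1,  "cb_nodes": 1},  300.0),
]
# A pattern close to the 128-rank runs gets their best-performing config.
print(predict_config(history, (120, 4, 96)))
```

A real predictive model would replace the nearest-neighbour lookup with a trained regressor and apply the chosen values via MPI-IO hints, but the input/output shape — pattern in, configuration out — is the same.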
Tuesday, June 29
This talk will provide an overview of EuroHPC activities and other efforts to realize the European HPC strategy. The main emphasis will be on the implementation of National Competence Centres for high-performance computing in 33 European nations, and their strategic alignment at the European level.
Paving the Way Towards Exascale (Dessoky)
Engineering applications will be among the first to reach the exascale level, not only in academia but also in industry, with the field of industrial engineering holding the highest exascale potential. For this reason, the EU-funded EXCELLERAT project brings together European experts to establish a Centre of Excellence (CoE) in Engineering Applications of HPC with a broad service portfolio, paving the way for the evolution towards exascale. The aim is to solve highly complex and costly engineering problems and create enhanced technological solutions. EXCELLERAT is focusing on the development of six reference applications: Nek5000, Alya, AVBP, TPLS, FEniCS, and Coda. They were analysed for their potential to achieve exascale performance in HPC for engineering, and thus are promising candidates to act as showcases for the evolution of applications towards execution on high-scale demonstrators, pre-exascale systems, and exascale machines.
Enabling High-Performance Computing for Industry through a Data Exchange & Workflow Portal (Luithardt)
Nowadays, organisations and smaller industrial partners face challenges in dealing with HPC calculations, with HPC in general, or even with access to HPC resources. In many cases, calculations are too complex, and potential users lack the expert knowledge required to fully benefit from HPC technologies without support. This is the challenge that SSC has taken on within EXCELLERAT. The Data Exchange & Workflow Portal it has developed can simplify or even eliminate these obstacles. First activities with HLRS have already started: the new platform enables users to easily access the two HLRS clusters, Hawk and Vulcan, from any authorised device and to run their simulations remotely.
The Gauss Centre for Supercomputing has contributed in many ways to addressing the COVID-19 pandemic. In one project, HLRS worked together with researchers at the German Federal Institute for Population Research (Bundesinstitut für Bevölkerungsforschung) to develop and implement a tool that automatically forecasts demand for intensive care units across the German federal states and NUTS-2 regions up to eight weeks into the future. This tool, based on a spatial, age-structured microsimulation model, was announced in a publication on medRxiv, and its code was made publicly available to enable national and sub-national forecasts of ICU demand in other nations and regions. This talk will provide an overview of the model and how it can help to manage COVID-related stresses on healthcare systems.
The presentation will look at the ambitions of industrialists using exascale computing, drawing on examples from EXCELLERAT and other initiatives and outlining some of the technical challenges that we experience and foresee. In addition, I will comment on whether exascale can be a realistic opportunity for SMEs.
FF4EuroHPC is a European R&D project funded by the European Commission as part of the EuroHPC Joint Undertaking programme. The key concept behind FF4EuroHPC is to demonstrate to European SMEs ways to optimize their performance with the use of advanced HPC services (e.g., modelling and simulation, data analytics, machine learning and AI, and combinations thereof) and thereby take advantage of these innovative ICT solutions for business benefit. FF4EuroHPC builds on the prior Fortissimo and Fortissimo 2 projects. The project has organized two open calls aimed at creating two tranches of application experiments that address business challenges faced by SMEs from diverse industry sectors. This presentation will give an introduction to the project and the Fortissimo approach, and then explain the opportunities and expectations for participants in the second open call.
Wednesday, June 30
Digital twins of cities and regions can help us understand and solve today's complex problems and assess risks and scenarios for the future. At the same time, the ongoing digital transformation of urban planning marks a revolutionary new era that poses highly complex, socially relevant, spatial, and temporal challenges for cities, planners, and society. The Smart City approach must become more accessible to all stakeholders. Digital twins visualised in VR and AR ("virtual twins") can help tackle these challenges and support participatory and collaborative processes for a more democratic urban development.
The Supercomputing Academy is a continuing education program that enables working professionals to strengthen their knowledge of the application, management and programming of supercomputers for simulation. Structured in a blended learning format, courses are designed to address the needs of participants with a range of HPC experience and interests. The program also offers certifications for HPC users, developers, and administrators, including the highest certification of HPC expert.
Thursday, July 1
Modern and future numerical simulations require high-resolution models with an almost unlimited number of unknowns. In many cases, solving the underlying large sparse linear systems, with up to billions of unknowns, takes the most time and consumes a great deal of energy. Working together in the SEQUOIA project with the Fraunhofer Institute for Industrial Engineering (Fraunhofer IAO) and five additional partners, HLRS is investigating the capability of known quantum algorithms from the field of linear algebra, such as HHL and the quantum phase estimation algorithm, to significantly reduce the complexity of solving large sparse linear systems on HPC systems. While quantum computers are currently unable to solve real-world numerical problems, experiments on existing machines such as the IBM Quantum System One in Ehningen will show which next steps are most promising for the adoption of quantum computing in HPC. This talk will present the first results of our small numerical experiments. In particular, it will address the challenges of the quantum computing approach in the HPC domain, namely the consequences of Noisy Intermediate-Scale Quantum (NISQ) technology.
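For context on the complexity reduction mentioned above (a sketch from the published literature, not part of the talk abstract): for an $s$-sparse $N \times N$ system with condition number $\kappa$ solved to precision $\epsilon$, the commonly cited asymptotic runtimes of classical conjugate gradient (CG) and of HHL are

```latex
T_{\mathrm{CG}} = O\!\left(N\, s\, \kappa \log(1/\epsilon)\right),
\qquad
T_{\mathrm{HHL}} = O\!\left(\log(N)\, s^{2}\, \kappa^{2} / \epsilon\right)
```

Note the caveat that HHL prepares a quantum state proportional to the solution vector rather than outputting it classically, which is one reason NISQ-era experiments of the kind described in the talk are needed to judge practical applicability.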
The growth of artificial intelligence (AI) is accelerating. AI has left research and innovation labs and nowadays plays a significant role in everyday life. The impact on society is tangible: autonomous cars produced by Tesla, voice assistants such as Siri, and AI systems that beat renowned champions in board games like Go. All of these advancements are facilitated by powerful computing infrastructures based on HPC and advanced AI-specific hardware, as well as highly optimized AI codes. For several years, HLRS has been engaged in big data and AI-specific activities around HPC. This talk will offer a brief overview of our research project CATALYST — aimed at engaging both researchers and SMEs — and present exciting case studies from our customers.
Friday, July 2
Over the last few years, machine learning (ML) — and in particular deep learning (DL) — has become an important research topic in the high-performance computing (HPC) community. This brings new users and data-intensive applications to HPC systems, which increasingly affects the design and operation of compute infrastructures. On the one hand, HPC environments and resources provide opportunities to attack ML/DL problems that are not otherwise tractable. On the other hand, the ML/DL community is only beginning to utilize the performance of HPC, leaving many opportunities for better parallelization and scalability unexplored. The intent of this workshop is to bring together researchers and practitioners from all communities to discuss three key topics in the context of high-performance computing and ML/DL methods: parallelization and scaling of ML/DL algorithms, ML/DL applications on HPC systems, and HPC systems design and optimization for ML/DL workloads.