AI Support

HLRS's user support group includes a dedicated team with expertise in running and optimizing algorithms for machine learning, deep learning, artificial intelligence, and high-performance data analytics on HLRS's supercomputing architectures. We assist users from a wide range of domains, not only the sciences and engineering but also fields such as business and finance, media, and the visual and performing arts, in getting codes to run efficiently and effectively on HLRS's systems. Our team has particular expertise in the following areas.

Distributed training and scalability

Large computing systems for AI, such as HLRS's Hawk and CS-Storm supercomputers, gain their power by distributing complex calculations across many nodes and accelerators. For researchers used to running training algorithms on a single GPU, however, scaling deep neural network training up to larger numbers of GPUs and compute nodes can be challenging. HLRS's user support staff assists our system users in developing AI codes written with frameworks like TensorFlow or PyTorch to take full advantage of the analytical capabilities of our distributed computing systems.
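
As a minimal sketch of what such distributed training can look like, the following PyTorch example wraps a placeholder model in DistributedDataParallel so that the framework handles gradient synchronization across GPUs. The model, data, and training loop are illustrative only; the launch mechanism (for example torchrun or the batch system's process manager) and the concrete job setup depend on the system being used.

    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP
    from torch.utils.data import DataLoader, TensorDataset
    from torch.utils.data.distributed import DistributedSampler

    def main():
        # One process per GPU; rank, world size, and LOCAL_RANK are supplied by
        # the launcher (for example torchrun), not hard-coded here.
        dist.init_process_group(backend="nccl")
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)

        # Placeholder model and synthetic data; a real application would build
        # its own network and dataset here.
        model = torch.nn.Linear(128, 10).cuda(local_rank)
        model = DDP(model, device_ids=[local_rank])

        dataset = TensorDataset(torch.randn(4096, 128), torch.randint(0, 10, (4096,)))
        sampler = DistributedSampler(dataset)      # shards the data across ranks
        loader = DataLoader(dataset, batch_size=64, sampler=sampler)

        optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
        loss_fn = torch.nn.CrossEntropyLoss()

        for epoch in range(2):
            sampler.set_epoch(epoch)               # reshuffle differently each epoch
            for x, y in loader:
                x, y = x.cuda(local_rank), y.cuda(local_rank)
                optimizer.zero_grad()
                loss = loss_fn(model(x), y)
                loss.backward()                    # gradients are all-reduced across GPUs here
                optimizer.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()

Launched with one process per GPU, for example via torchrun, each process trains on its own shard of the data while DistributedDataParallel keeps the model replicas synchronized.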

HLRS staff can also provide general consulting assistance, reviewing users' existing source code to identify opportunities for optimization and faster time to solution. This can include investigating data pipelines and communication frameworks.
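
A frequent finding of such reviews is an input pipeline that leaves GPUs idle because data loading and preprocessing run serially. Purely as an illustration, and assuming a PyTorch code, the sketch below shows the kind of DataLoader settings that are often worth checking; the synthetic dataset and the specific parameter values are placeholders.

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    # A small synthetic dataset stands in for the application's own Dataset.
    dataset = TensorDataset(torch.randn(10000, 128), torch.randint(0, 10, (10000,)))

    loader = DataLoader(
        dataset,
        batch_size=256,
        num_workers=8,            # load and preprocess batches in parallel CPU processes
        pin_memory=True,          # allow faster, asynchronous host-to-GPU copies
        prefetch_factor=4,        # each worker keeps several batches ready in advance
        persistent_workers=True,  # keep workers alive between epochs
    )

    for inputs, labels in loader:
        pass  # the training step for each batch would go here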

Hybrid workflows: high-performance computing meets AI

High-performance computing (HPC) and AI offer complementary capabilities for computational research: HPC can rapidly generate large data sets for training neural networks, while AI can quickly extract insights from or make predictions on large amounts of data. Some of the most exciting computational research happening today takes place at this intersection. However, because of a variety of issues related to both hardware and software, combining applications from the two paradigms in an efficient and integrated manner remains difficult.

Computer scientists at HLRS are conducting research to develop more efficient hybrid workflows that enable users to seamlessly perform and integrate HPC data generation and data analytics. This includes both moving data between HPC- and AI-dedicated compute clusters and programming workflows for our Hawk supercomputer, which offers both HPC and AI capabilities.
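
As a schematic sketch of what such a hybrid workflow can look like, the example below alternates between a data-generation stage and a training stage in a single Python script. The solver command, output format, and surrogate model are hypothetical placeholders; in practice the two stages typically run as separate jobs on the HPC and AI partitions and are coupled through a shared file system or a workflow manager.

    import subprocess
    import numpy as np
    import torch

    def run_simulation(step: int) -> str:
        """Launch a hypothetical MPI-parallel solver that writes one row of
        eight values per sample to a NumPy file and return that file's path."""
        outfile = f"sim_output_{step}.npy"
        subprocess.run(["mpirun", "-np", "128", "./solver", "--out", outfile], check=True)
        return outfile

    def train_surrogate(model, optimizer, outfile: str) -> None:
        """Fit a small surrogate model on the simulation output (placeholder logic)."""
        data = torch.from_numpy(np.load(outfile)).float()
        inputs, targets = data[:, :-1], data[:, -1:]   # 7 input features, 1 target
        for _ in range(10):
            optimizer.zero_grad()
            loss = torch.nn.functional.mse_loss(model(inputs), targets)
            loss.backward()
            optimizer.step()

    model = torch.nn.Linear(7, 1)      # toy surrogate; real models are task-specific
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    for step in range(3):
        outfile = run_simulation(step)              # HPC stage: generate training data
        train_surrogate(model, optimizer, outfile)  # AI stage: update the surrogate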

Quantum machine learning and other new applications of AI

Artificial intelligence approaches are rapidly advancing in ways that intersect with other kinds of computing, including Internet of Things frameworks and quantum computing. HLRS user support staff also have relevant experience in these areas and invite users with related projects to contact us.

Dennis Hoppe

Head, Service Management and Business Processes

+49 711 685-60300 dennis.hoppe(at)hlrs.de