HPE Apollo (Hawk)


Hawk is the flagship supercomputer of the High-Performance Computing Center Stuttgart and one of Europe's fastest computing systems. Designed with the needs of HLRS's users in mind, Hawk is optimized to provide large-scale computing power for complex simulations. Its CPU-based architecture also includes a GPU partition that is suitable for high-performance data analytics and artificial intelligence applications, and for hybrid workflows that combine high-performance computing and AI.

With a theoretical peak performance of 26 Petaflops, Hawk debuted in 2020 at #16 on the Top500 List.

Funding

Funding for HLRS's Hawk supercomputer is provided by the Baden-Württemberg Ministry for Science, Research, and the Arts and the German Federal Ministry of Education and Research through the Gauss Centre for Supercomputing.


System components

System Type: Hewlett Packard Enterprise Apollo

Number of cabinets: 44
Number of compute nodes: 5,632
System peak performance: 26 Petaflops


CPU Type: AMD EPYC 7742

CPUs per node: 2
Cores per CPU: 64
Number of compute cores: 720,896
CPU frequency: 2.25 GHz
DIMMs in system: 90,112
Total system memory: ~1.44 PB
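These figures are internally consistent. As an illustration, the following sketch reproduces the core count, peak performance, and memory totals, assuming 16 double-precision FLOPs per cycle per Zen 2 core and 16 GB DIMMs (both assumed figures, not stated above):

```python
# Cross-check of the CPU partition figures listed above.
# Assumption: each Zen 2 core retires 16 double-precision FLOPs
# per cycle (two 256-bit FMA units) -- not stated above.

nodes = 5_632
cpus_per_node = 2
cores_per_cpu = 64
freq_hz = 2.25e9
flops_per_cycle = 16  # assumed microarchitectural figure

cores = nodes * cpus_per_node * cores_per_cpu
peak_flops = cores * freq_hz * flops_per_cycle

print(f"compute cores: {cores:,}")              # 720,896
print(f"peak: {peak_flops / 1e15:.1f} PFlops")  # ~26 PFlops

# Memory: 90,112 DIMMs across 5,632 nodes is 16 DIMMs per node;
# ~1.44 PB total implies 16 GB per DIMM (assumed capacity).
dimms = 90_112
dimm_gb = 16
total_pb = dimms * dimm_gb / 1e6
print(f"memory: ~{total_pb:.2f} PB")            # ~1.44 PB
```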


GPU system type: Apollo 6500 Gen10 Plus

Number of cabinets: 4
Nodes per cabinet: 6
Nodes in system: 24
CPU type: AMD EPYC
GPUs per node: 8
GPU type: NVIDIA A100
GPUs in system: 192
AI performance (node): ~5 PFlops
AI performance (system): ~120 PFlops
Node to node interconnect: Dual Rail InfiniBand HDR200
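The AI-performance figures follow directly from the GPU count. The sketch below assumes a per-A100 tensor-core throughput of roughly 0.624 PFlops (FP16/BF16 with structured sparsity), a figure not stated above:

```python
# Cross-check of the GPU partition's AI-performance figures.
# Assumption: ~0.624 PFlops of tensor-core throughput per A100
# (FP16/BF16 with structured sparsity) -- not stated above.

gpus_per_node = 8
nodes = 24
a100_ai_pflops = 0.624  # assumed per-GPU figure

node_pflops = gpus_per_node * a100_ai_pflops  # ~5 PFlops per node
system_pflops = nodes * node_pflops           # ~120 PFlops system-wide
gpus_total = nodes * gpus_per_node            # 192 GPUs

print(f"{gpus_total} GPUs, ~{node_pflops:.0f} PFlops/node, "
      f"~{system_pflops:.0f} PFlops total")
```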


Data storage and networking

Storage: DDN EXAScaler

Disks in system: ~2,400
Capacity per disk: 14 TB
Total disk storage capacity: ~42 PB


Frontend and service nodes

Rack type (frontend and service nodes): Adaptive Rack Cooling System (ARCS)
Racks (frontend and service nodes): 5 + 2 ARCS cooling towers
Frontend nodes: 10 x HPE ProLiant DL385 Gen10
Memory of frontend nodes: 5 x 1 TB, 4 x 2 TB, 1 x 4 TB
Data mover nodes: 4 x HPE ProLiant DL385 Gen10
Service nodes: Red Hat Enterprise Linux 8


Node to node interconnect: InfiniBand HDR200

Interconnect topology: Enhanced 9D-Hypercube
Interconnect bandwidth: 200 Gbit/s
Total InfiniBand cables: 3,024
Total cable length: ~20 km
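In a plain d-dimensional hypercube, two vertices are connected exactly when their binary coordinates differ in one bit, so every vertex of a 9D hypercube has nine neighbors and any two vertices are at most nine hops apart. The sketch below illustrates only this basic structure; Hawk's "enhanced" variant, which attaches multiple nodes per vertex and adds further links, is not modeled:

```python
# Neighbors of a vertex in a plain d-dimensional hypercube:
# flip each of the d coordinate bits in turn.

def hypercube_neighbors(vertex: int, d: int) -> list[int]:
    return [vertex ^ (1 << bit) for bit in range(d)]

d = 9  # 9D hypercube: 2**9 = 512 vertices, 9 links per vertex
print(len(hypercube_neighbors(0, d)))       # 9 neighbors per vertex
print(hypercube_neighbors(0, d)[:3])        # [1, 2, 4]
```

The maximum hop count between any two vertices equals their Hamming distance, bounded by the dimension d.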


Power and cooling

Power consumption

Maximum power consumption per rack: ~90 kW
Power supplies in system: 2,112
System power consumption, normal operation: ~3.5 MW
System power consumption, LinPack operation: ~4.1 MW
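A rough cross-check, under the idealized assumption that all 44 compute cabinets simultaneously draw close to their ~90 kW maximum:

```python
# Rough consistency check of the power figures above.
# Assumption: all 44 compute cabinets near the ~90 kW per-rack maximum.
racks = 44
max_kw_per_rack = 90

compute_mw = racks * max_kw_per_rack / 1000
print(f"~{compute_mw:.1f} MW")  # ~4.0 MW, in line with the ~4.1 MW LinPack figure
```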


Cooling

Cooling distribution units (CDUs): 6
Water inlet temperature (CDUs): 25°C
Water return temperature (CDUs): 35°C
Volume of cooling liquid in the system: ~2.5 m³
Water inlet temperature (ARCS cooling towers): 16°C
Water evaporation by wet cooling towers: ~9 m³/h

