HPE Apollo (Hawk)


Optimized to provide large-scale computing power for complex simulations, Hawk was the flagship supercomputer of the High-Performance Computing Center Stuttgart (HLRS) from 2020 until early 2025. Alongside its CPU-based architecture, Hawk included a GPU partition suited to high-performance data analytics, artificial intelligence applications, and hybrid workflows that combine high-performance computing and AI. With a theoretical peak performance of 26 Petaflops, Hawk debuted in 2020 at #16 on the Top500 list.
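For orientation, the short calculation below shows how a theoretical peak of this order follows from the system's core count, clock frequency, and per-core floating-point throughput. The 16 double-precision operations per cycle per Zen 2 core and the roughly 3.1 GHz clock used here are illustrative assumptions, not official HLRS figures.

# Back-of-the-envelope theoretical peak for the CPU partition.
# Assumptions (not official HLRS numbers): 16 double-precision FLOPs per cycle
# per Zen 2 core and a sustained clock of ~3.1 GHz (the base clock is 2.25 GHz).
nodes = 4096
cpus_per_node = 2
cores_per_cpu = 64
flops_per_cycle = 16      # 2 x 256-bit FMA units x 4 doubles x 2 ops (assumed)
clock_hz = 3.1e9          # assumed effective clock

cores = nodes * cpus_per_node * cores_per_cpu        # 524,288 cores
peak_flops = cores * clock_hz * flops_per_cycle
print(f"{cores:,} cores -> ~{peak_flops / 1e15:.0f} PFlops theoretical peak")  # ~26 PFlops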

Hawk was taken out of service in April 2025. Its AI expansion, which includes NVIDIA A100 GPUs, remains in operation.

Funding

Funding for the Hawk supercomputer was provided by the German Federal Ministry of Education and Research (BMBF) and the Baden-Württemberg Ministry for Science, Research and Arts (MWK) through the Gauss Centre for Supercomputing (GCS).


System components

System Type: Hewlett Packard Enterprise Apollo

Number of cabinets: 32
Number of compute nodes: 4,096
System peak performance: 26 Petaflops


CPU Type: AMD EPYC 7742

CPUs per node: 2
Cores per CPU: 64
Number of compute cores: 524,288
CPU frequency: 2.25 GHz
DIMMs in system: 65,536
Total system memory: ~1 PB
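As a rough cross-check, the system totals in this table can be broken back down to per-node values. Reading "~1 PB" as approximately one pebibyte, and assuming one DIMM per memory channel, are assumptions made only for this sketch.

# Breaking the CPU-partition totals above back down to per-node values.
# Assumption: "~1 PB" is read here as roughly one pebibyte (2**50 bytes).
nodes = 4096
dimms_total = 65_536
memory_total_bytes = 2**50

cores_total = nodes * 2 * 64                               # 2 CPUs x 64 cores -> 524,288
dimms_per_node = dimms_total // nodes                      # 16 DIMMs per node
memory_per_node_gib = memory_total_bytes / nodes / 2**30   # 256 GiB per node
dimm_size_gib = memory_per_node_gib / dimms_per_node       # 16 GiB per DIMM (assumed)

print(f"{cores_total:,} cores, {dimms_per_node} DIMMs/node, "
      f"~{memory_per_node_gib:.0f} GiB/node, ~{dimm_size_gib:.0f} GiB/DIMM")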


GPU system type: Apollo 6500 Gen10 Plus

Number of cabinets: 4
Nodes per cabinet: 6
Nodes in system: 24
CPU type: AMD EPYC
GPUs per node: 8
GPU type: NVIDIA A100
GPUs in system: 192
AI performance per node: ~5 PFlops
AI performance, total system: ~120 PFlops
Node-to-node interconnect: dual-rail InfiniBand HDR200
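The AI-performance figures follow from the GPU counts and the per-GPU tensor throughput. The value of roughly 624 TFLOPS per A100 assumed below corresponds to FP16/BF16 tensor operations with structured sparsity; whether the quoted figures refer to exactly this precision is an assumption.

# Rough reconstruction of the AI-performance figures for the GPU partition.
# Assumption: ~624 TFLOPS per NVIDIA A100 (FP16/BF16 tensor cores with sparsity).
gpu_nodes = 24
gpus_per_node = 8
tflops_per_gpu = 624

gpus_total = gpu_nodes * gpus_per_node               # 192 GPUs in the system
node_pflops = gpus_per_node * tflops_per_gpu / 1000  # ~5 PFlops per node
system_pflops = gpus_total * tflops_per_gpu / 1000   # ~120 PFlops in total
print(f"{gpus_total} GPUs, ~{node_pflops:.0f} PFlops/node, ~{system_pflops:.0f} PFlops system")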


Data storage and networking

Storage: DDN EXAScaler

Disks in system: ~2,400
Capacity per disk: 14 TB
Total disk storage capacity: ~42 PB

For Users

If you are a current HLRS system user, see the HLRS user support pages to report a problem, check on Hawk's operating status, or find technical documentation and other information related to your project.

Frontend and service nodes

Rack type (frontend and service nodes): Adaptive Rack Cooling System (ARCS)
Racks (frontend and service nodes): 5, plus 2 ARCS cooling towers
Frontend nodes: 10 x HPE ProLiant DL385 Gen10
Memory of frontend nodes: 5 x 1 TB, 4 x 2 TB, 1 x 4 TB
Data mover nodes: 4 x HPE ProLiant DL385 Gen10
Service node operating system: Red Hat Enterprise Linux 8


Node-to-node interconnect: InfiniBand HDR200

Interconnect topology: Enhanced 9D-Hypercube
Interconnect bandwidth: 200 Gbit/s
Total InfiniBand cables: 3,024
Total cable length: ~20 km
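In a plain binary hypercube, every node group is addressed by a bit string and is directly connected to the groups whose addresses differ in exactly one bit; the "enhanced" variant adds further links on top of this pattern. The sketch below illustrates only the basic XOR-based neighbour rule for a 9-dimensional cube and is not HPE's actual routing implementation.

# Basic hypercube addressing (illustration only, not HPE's "enhanced" scheme).
# In a d-dimensional hypercube, two vertices are neighbours exactly when their
# addresses differ in a single bit, so every vertex has d direct links.
def hypercube_neighbours(vertex: int, dims: int = 9) -> list[int]:
    return [vertex ^ (1 << bit) for bit in range(dims)]

def min_hops(src: int, dst: int) -> int:
    # Shortest path length = Hamming distance between the two addresses.
    return bin(src ^ dst).count("1")

print(len(hypercube_neighbours(0)))        # 9 neighbours per vertex in a 9D cube
print(min_hops(0b000000000, 0b111111111))  # worst case: 9 hops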


Power and cooling

Power consumption

Maximum power consumption per rack: ~90 kW
Power supplies in system: 2,112
System power consumption, normal operation: ~3.5 MW
System power consumption, LINPACK operation: ~4.1 MW
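Relating these power figures to the peak performance quoted above gives a rough energy-efficiency estimate. Because the calculation below uses the theoretical peak rather than a measured LINPACK result, it should be read as an upper bound.

# Rough energy-efficiency estimate from the published figures.
# Uses the theoretical peak, so the result is an upper bound.
peak_pflops = 26            # theoretical peak of the CPU partition
power_linpack_mw = 4.1      # power draw during a LINPACK run
power_normal_mw = 3.5       # power draw in normal operation

gflops_per_watt = peak_pflops * 1e6 / (power_linpack_mw * 1e6)
annual_mwh = power_normal_mw * 8760          # assumes continuous normal-load operation
print(f"~{gflops_per_watt:.1f} GFlops/W at theoretical peak")   # ~6.3 GFlops/W
print(f"~{annual_mwh:,.0f} MWh per year at normal load")        # ~30,660 MWh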


Cooling

Cooling distribution units (CDUs): 6
Water inlet temperature (CDUs): 25°C
Water return temperature (CDUs): 35°C
Volume of cooling liquid in the system: ~2.5 m³
Water inlet temperature (ARCS cooling towers): 16°C
Water evaporation by wet cooling towers: ~9 m³/h
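The cooling figures can be cross-checked with a simple heat balance: the warm-water loop removes heat in proportion to its flow rate and the 10°C temperature rise across the CDUs, and the wet cooling towers reject it largely through evaporation. The sketch below uses standard properties of water and the LINPACK power figure from above; it is an order-of-magnitude estimate, not an HLRS measurement.

# Order-of-magnitude heat balance for the warm-water cooling loop.
heat_load_w = 4.1e6        # take the LINPACK power draw as the heat load
c_water = 4186             # J/(kg*K), specific heat capacity of water
delta_t = 35 - 25          # K, CDU return minus inlet temperature

mass_flow_kg_s = heat_load_w / (c_water * delta_t)   # ~98 kg/s of cooling water
volume_flow_m3h = mass_flow_kg_s * 3600 / 1000       # ~350 m3/h through the CDUs

# Heat rejected by evaporation in the wet cooling towers.
latent_heat = 2.26e6       # J/kg, latent heat of vaporisation of water
evaporation_m3h = 9
evaporative_mw = evaporation_m3h * 1000 / 3600 * latent_heat / 1e6   # ~5.7 MW

print(f"required cooling-water flow: ~{volume_flow_m3h:.0f} m3/h")
print(f"heat removable by ~{evaporation_m3h} m3/h evaporation: ~{evaporative_mw:.1f} MW")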