HPE Apollo (Hawk)

Next-Generation HPC System @ HLRS

Hawk is the flagship supercomputer of the High-Performance Computing Center Stuttgart (HLRS). At the time of its installation, it was among the fastest high-performance computers in the world and the fastest general-purpose machine for industrial production in Europe.

Designed with the needs of HLRS's users in mind, Hawk is optimized for engineering simulation applications. In concert with our other platforms for high-performance data analytics and artificial intelligence, Hawk enables HLRS to support new kinds of workflows that combine data generation, simulation, big data analysis, deep learning, and other data science methods.

System Type: Hewlett Packard Enterprise Apollo

Number of cabinets: 44
Number of compute nodes: 5,632
System peak performance: 26 PFlops

CPU Type: AMD EPYC™ 7742

CPUs per node: 2
Cores per CPU: 64
Number of compute cores: 720,896
CPU frequency: 2.25 GHz
DIMMs in system: 90,112
Total system memory: ~1.44 PB
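The headline figures above follow from the per-node specs. A quick cross-check (the 16 double-precision FLOPs per cycle per core is our assumption for the EPYC 7742's Zen 2 core, not stated in the spec):

```python
# Cross-check Hawk's headline figures from the per-node specs.
nodes = 5_632
cpus_per_node = 2
cores_per_cpu = 64
ghz = 2.25
flops_per_cycle = 16  # assumed: Zen 2 double-precision FMA throughput per core

cores = nodes * cpus_per_node * cores_per_cpu
peak_pflops = cores * ghz * 1e9 * flops_per_cycle / 1e15

print(cores)                  # 720896 -- matches the table
print(round(peak_pflops, 2))  # 25.95 -- matches the quoted 26 PFlops

# Memory: 90,112 DIMMs over 5,632 nodes is 16 DIMMs per node; ~1.44 PB total
# implies 256 GB per node, i.e. 16 GB per DIMM (our inference, not in the spec).
dimms_per_node = 90_112 // nodes
print(dimms_per_node)         # 16
```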

Node to Node Interconnect: InfiniBand HDR200

Interconnect topology: enhanced 9D hypercube
Interconnect bandwidth: 200 Gbit/s
Total InfiniBand cables: 3,024
Total cable length: ~20 km
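In a plain d-dimensional hypercube, each vertex gets a d-bit address and is linked to every vertex whose address differs in exactly one bit, so each vertex has d neighbors and the hop count is the Hamming distance. The sketch below illustrates this for d = 9; the "enhanced" variant Hawk uses adds further links and nodes per vertex, which this sketch does not model:

```python
# Plain 9D hypercube addressing (illustration only; Hawk's enhanced
# topology is richer than this).
D = 9  # dimensions, giving 2**9 = 512 switch positions

def neighbors(node_id: int, d: int = D) -> list[int]:
    """IDs reachable in one hop: flip exactly one of the d address bits."""
    return [node_id ^ (1 << bit) for bit in range(d)]

def hops(a: int, b: int) -> int:
    """Minimum hop count between two vertices = Hamming distance of IDs."""
    return bin(a ^ b).count("1")

print(len(neighbors(0)))  # 9 direct neighbors per vertex
print(hops(0, 2**D - 1))  # 9, the network diameter
```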

System Type: Apollo 6500 Gen10 Plus

Number of cabinets: 4
Nodes per cabinet: 6
Nodes in system: 24
GPUs per node: 8
GPUs in system: 192
AI performance per node: ~5 PFlops
AI performance per system: ~120 PFlops
Node-to-node interconnect: dual-rail InfiniBand HDR200
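The system-level figures for the AI partition are simply the per-node figures scaled by its 24 nodes; a one-line sanity check:

```python
# Aggregate figures for the Apollo 6500 AI partition from the per-node specs.
nodes, gpus_per_node, node_pflops = 24, 8, 5  # ~5 PFlops/node is approximate

print(nodes * gpus_per_node)  # 192 GPUs in system
print(nodes * node_pflops)    # 120 -- matches the quoted ~120 PFlops
```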

Storage: DDN EXAScaler with IME

Disks in system: ~2,400
Capacity per disk: 14 TB
Total disk storage capacity: ~25 PB
Burst buffer type: DDN Infinite Memory Engine (IME)
Burst buffer capacity: ~660 TB
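Note that ~2,400 disks at 14 TB each give more raw capacity than the ~25 PB quoted; the gap is presumably redundancy and filesystem overhead in the EXAScaler (Lustre) setup, which is our assumption rather than something the spec states:

```python
# Raw vs. quoted usable capacity of the DDN EXAScaler storage.
disks, tb_per_disk = 2_400, 14
raw_pb = disks * tb_per_disk / 1000

print(raw_pb)  # 33.6 PB raw; the quoted ~25 PB usable (~74% of raw)
               # presumably reflects parity/redundancy overhead (assumption).
```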

Power Consumption

Maximum power consumption per rack: ~90 kW
Power supplies in system: 2,112
System power consumption, normal operation: ~3.5 MW
System power consumption, LINPACK operation: ~4.1 MW

Frontend and Service Nodes

Rack type (frontend and service nodes): Adaptive Rack Cooling System (ARCS)
Racks (frontend and service nodes): 5 + 2 ARCS cooling towers
Frontend nodes: 10 x HPE ProLiant DL385 Gen10
Memory of frontend nodes: 5 x 1 TB, 4 x 2 TB, 1 x 4 TB
Data mover nodes: 4 x HPE ProLiant DL385 Gen10
Operating system of service nodes: Red Hat Enterprise Linux 8


Cooling

Cooling distribution units (CDUs): 6
Water inlet temperature (CDUs): 25°C
Water return temperature (CDUs): 35°C
Volume of cooling liquid in the system: ~2.5 m³
Water inlet temperature (ARCS cooling towers): 16°C
Water evaporation by wet cooling towers: ~9 m³/h
[Image] HPE Apollo (Hawk) (Copyright: Ben Derzian for HLRS)

[Image] HPE Apollo Hawk, fisheye view (Copyright: Ben Derzian for HLRS)


For the full list of technical documentation, please visit here.

If you are experiencing technical problems or need assistance troubleshooting, please submit a ticket here.