Scheduled to go into service in the second half of 2026, HammerHAI is an AI-optimized supercomputer that will be hosted at the High-Performance Computing Center Stuttgart in conjunction with the AI Factory HammerHAI. With up to 15 exaflops of peak AI inference performance, the system will offer a powerful platform for medium- to large-scale workloads involving artificial intelligence, machine learning, and data science methods. It incorporates a cloud-native software stack that is familiar to the AI community, making it straightforward to migrate or scale applications from local systems or commercial cloud environments. At the same time, HammerHAI offers a sovereign European alternative to commercial AI service providers that is based in Germany and operated in accordance with EU data security regulations. Following its installation, HammerHAI will be included in HLRS’s certification under the ISO 27001 standard for information security management systems.
HammerHAI is designed to provide world-class AI capabilities for European industry, small and medium-sized enterprises, startups, and scientific research, particularly in the engineering, manufacturing, automotive and mobility sectors. It is the first standalone, AI-optimized supercomputer in Europe to be procured by the EuroHPC Joint Undertaking under its AI Factories initiative.
For more information about the HammerHAI AI Factory and how to gain access to the HammerHAI supercomputer, visit https://www.hammerhai.eu.
Manufacturer: HPE

| Node type | Configuration |
| --- | --- |
| 215 NVIDIA GB200 NVL4 nodes (860 GPUs) | Each node contains: 2 NVIDIA Grace CPUs (72 cores each), 4 NVIDIA B200 Blackwell GPUs, 744 GB HBM3e high-bandwidth memory, 940 GB LPDDR5X low-power memory, 30 TB NVMe local storage |
| Axelera AI nodes (9 AIPUs) | Each node contains one Axelera AI processing unit (AIPU); delivery expected in 2027 |
| Custom nodes | 13 nodes with NVIDIA L4 Tensor Core GPUs (24 GB memory); 5 large-memory nodes with NVIDIA RTX A1000 GPUs (4 TB memory) |
10 PB VAST Data DASE (Disaggregated Shared-Everything) storage
NVIDIA Quantum-X800 InfiniBand (800 Gbit/s)
NVIDIA Spectrum-X (800 Gigabit Ethernet)
15 exaflops of peak AI inference performance
Operating system: Ubuntu 24.04
HPE Morpheus Enterprise cloud management platform
SLURM for node-level isolation and large-scale training jobs
Kubernetes for dynamic workloads
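To illustrate the SLURM-based access path for large-scale training jobs, a batch script might look like the sketch below. The partition name, module setup, and script name are assumptions for illustration only, not documented HammerHAI settings; consult the system documentation for the actual values.

```shell
#!/bin/bash
#SBATCH --job-name=train-demo        # illustrative job name
#SBATCH --nodes=2                    # two GB200 NVL4 nodes
#SBATCH --gres=gpu:4                 # request the 4 Blackwell GPUs per node
#SBATCH --ntasks-per-node=4          # one task per GPU
#SBATCH --time=04:00:00              # wall-clock limit
#SBATCH --partition=gpu              # ASSUMPTION: actual partition name not published

# Launch one training process per allocated GPU across both nodes.
# "train.py" is a placeholder for the user's own training script.
srun python train.py
```

Because SLURM allocates whole nodes for jobs of this kind, each job runs in node-level isolation; Kubernetes handles the more dynamic, service-style workloads instead.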
HammerHAI has received funding from the European High Performance Computing Joint Undertaking under grant agreement No. 101234027. This project is co-funded by the European Commission, the German Federal Ministry of Research, Technology and Space (BMFTR), the Baden-Württemberg Ministry of Science, Research and the Arts, the Bavarian State Ministry of Science and the Arts, and the Lower Saxony Ministry of Science and Culture.