High-Performance Computing Center Stuttgart

Strength through Cooperation: An Interview with Steve Conway

Photo portrait of Steve Conway, with geometric pattern in background.
A respected HPC industry analyst, Steve Conway has for many years consulted with HLRS to gain insights into the trends driving the field of high-performance computing.

The respected HPC analyst discusses HLRS's leading role in supporting industry and why he finds developments in European HPC over the last decade so exciting.

Steve Conway is a veteran in the field of high-performance computing and artificial intelligence. As a market analyst at IDC, Hyperion Research, and Intersect360 Research, he has spent 40 years consulting with leaders across the international HPC landscape, publishing insights into key trends in the field and recommendations for exploiting the opportunities that HPC and AI could offer. 

Conway has also had a long relationship with HLRS, drawing on its expertise in his efforts to forecast HPC's future. He recently visited Stuttgart, where we spoke with him about the growth of HPC capabilities in Europe, how scientific and industrial uses of HPC complement one another, the rise of artificial intelligence, and current challenges that Europe faces in its effort to achieve digital sovereignty.

How did you first encounter HLRS and what has been your relationship with the center over the years? 

In my early days as an analyst of the high-performance computing industry, I quickly learned that HLRS was one of only three or four HPC centers in the world that were seriously and successfully working with companies, and I wanted to learn more about it. Around 1999 I was working for the HPC industry analysis firm IDC when the United States government asked us to start a user group that was not tied to a specific hardware vendor. We wanted to include international users and needed to find places that were appropriate for holding conferences, and every couple of years we would meet in Stuttgart. In 2010, the European Commission asked us to prepare a first-ever Europe-wide HPC strategy, and I led its preparation. HLRS's director, Michael Resch, was one of six reviewers of the report. The Commission disseminated the European HPC strategy paper in 2012, and in 2014 they asked us to measure progress, so we did another massive study. Again, HLRS was a very important source, and since then has continued to be one.

In November 2025 you published an article in HPCWire in which you named the rise of European high-performance computing as the most exciting development you have witnessed in your career. Why do you feel this way?

Governments have for a long time recognized HPC as something that is important for scientific research. At some point they also saw its value for industrial research. They hadn't made the leap to the natural next conclusion, however, which is that high-performance computing is also important for economic competitiveness and GDP growth. After we worked on the second study for the EU, I read the proposal that ultimately went to the European Parliament for funding. What I found remarkable was that while they hardly talked about science, and just a little bit about industry, they spoke a lot about economic competitiveness. I thought that was really smart. For the first time, they were talking the funders' language, and it made a gigantic difference. 

You also wrote about the impact of PRACE and, more recently, the EuroHPC Joint Undertaking in implementing this pan-European HPC strategy.

PRACE started as a collaboration among four countries, and very early they did something that nobody realized would turn out to be so powerful: They characterized European HPC centers according to their supercomputing capabilities. There were national Tier 0 centers — like HLRS — and other centers were categorized as Tier 1 and Tier 2. This made it possible to start thinking on a European scale. This scheme has persisted, and has even been adopted in other countries like Australia. 

In our 2014 study of European progress in HPC, we said that if Europe wanted to be globally important in this field, it would have to be prepared to buy a couple of exascale computers. At the time there was no way to do that, though. We recommended increasing how much the European Commission could contribute to purchasing a large supercomputer from 20% to 50%, and changing the rules to allow member states to collaborate economically. All of a sudden the EuroHPC Joint Undertaking had the tools it needed. This decision enabled multiple member states to work closely with each other, and to propose large supercomputers as a team. This was extremely important for Europe, because traditionally the big six economies had controlled the supercomputer scene, creating a division between wealthy and less wealthy countries. The changes implemented by the JU went a long way towards solving the rich/poor, north/south, east/west problem that had plagued European HPC for years. 

As you know, HLRS has been managing the projects EuroCC and CASTIEL, which established a Europe-wide network of national competency centers for HPC and AI. The projects have promoted collaboration and the adoption of best practices across all NCCs. How do you see the impact of these initiatives? 

Once the national competence centers were selected, it was suddenly clear that competencies were extremely different in different places. By bringing countries across Europe together to coordinate and share expertise, EuroCC and CASTIEL have been working to address these discrepancies. Plus, even though many consider English the lingua franca for high-performance computing, that might not be the case in some countries. The ability to draw on other regions' expertise in HPC and AI, while also being able to translate this knowledge within your own cultural setting, has become an important component for advancing Europe's HPC strategy.

Reading your 2012 European strategy paper more than 10 years later, many of the needs you identified have since been addressed in one way or another. In what areas is improvement still needed?

Thinking about a European strategy leads to the discussion of sovereignty. HPC is increasingly considered to be a strategic resource, which means that you can't afford to be too dependent on foreign sources for it, because political relations are uncertain. What does sovereignty mean, though? For Europe this has meant working to develop a homegrown supply chain, a process that is well underway. There are still some missing pieces, though. For example, if you're going to have a completely sovereign market that is walled off with trade barriers, you'd better have at least two competent vendors in each product category so that there is competitive bidding and innovation. Currently, Europe has just one major vendor of its own that is capable of building HPC systems. Processor initiatives are also very important, and Europe is still at the beginning of that trajectory.

“Industrial problems are often just as challenging as scientific problems... HLRS has been one of the few centers — not just in Europe, but in the world — that really understands these things.”

Another important question is that if you have a sovereign market, what's the size of that market? How many vendors can that market sustain at a world-class level? And how many requirements within that geography can be incorporated into your product? There is a tension between protectionism and wanting your vendors to have as large a market as possible. Success means selling to a global market, which also means needing to address a wider range of requirements. One of the things that companies like IBM and Cray learned early is that the only way to produce a world-class product is to get it into the hands of users around the world. This is how you identify requirements, which you can then embed in next-generation products.

In practical terms, complete sovereignty is unachievable. Nobody, for example, can build a processor without relying on non-indigenous capabilities, such as manufacturing in Taiwan, supplies of materials like lithium, or advanced lithography from the Netherlands. In this sense, the goal cannot be complete independence, but rather complete confidence that your local and nonlocal sources are as secure and as uninterruptible as possible. Pragmatism is also very important for sovereignty. 

You talked earlier about HPC for industry as an area where HLRS has been very active. How have interactions between the academic and industrial worlds changed over your career?

I became interested in the topic of HPC for industry in about 2003, when I led National Science Foundation-funded studies for the Council on Competitiveness in Washington. Most clients at the NSF are small to medium-sized universities, and when we polled their HPC users and the businesses that were using their systems, we discovered that programs for industry access to HPC already existed and were wildly successful. Satisfaction scores, both for companies and the HPC centers serving them, were above 90 percent. This was not what the NSF wanted to hear, though. Their systems were all oversubscribed, with demand from the academic community often two to three times their existing capacity. The last thing they wanted to hear was that they should spend more energy marketing HPC usage to industry. I took this as an important lesson, though. 

When we did another study for NSF in 2016-17, I pointed out that many universities were trying to attract industry due to pressure from their local economic development councils and governments. And they were failing terribly because they didn't know how to do it. This led us to recommend conducting a study to gather best practices in providing HPC for industry. We told them that we were aware of HPC centers — including HLRS — that know how to do it and that it could be very helpful to disseminate that understanding. When we were working on that report, HLRS was very helpful in providing input. 

What benefits have you observed when academic HPC centers work with industry?

When we began the study it was already clear that access to HPC gives companies the ability to develop superior products in shorter timeframes. But what about the benefits for HPC centers? We heard very consistent responses. The biggest advantage was that working with industry enabled the centers to identify new pathways for science. The second was that scientists love working on real-world problems, not just theoretical problems. Being able to incorporate industry applications into the mix helped the HPC centers attract and retain staff scientists and HPC center personnel. 

We were shocked by these findings, because a lot of times the mindset within governments was that opening access to companies was a "mercy move." They thought valuable resources were being wasted to address trivial problems. It turned out that industrial problems are often just as challenging as scientific problems, and that even the scientists endorsed this view. HLRS made important contributions to this study because it has been one of the few centers — not just in Europe, but in the world — that really understands these things. 

An important component of the European AI Factories initiative, which includes HammerHAI, is also to support industry, SMEs, and start-ups. What role do you see them playing? 

AI is still in a very exploratory stage and the number one question at the moment is what the borders are between frontier AI and enterprise AI. When you look closely, frontier AI is running on HPC technology, using everything from HPC infrastructure to MPI, a classic parallel programming standard. They've hired a lot of people with HPC backgrounds to run their programs, and so there's a tight and enduring connection. What is happening in typical business enterprises, though, is almost exclusively focused on increasing individual productivity, and rarely on accelerating new corporate initiatives. These companies will be looking to frontier AI organizations like HLRS and HammerHAI for new ideas to apply to corporate initiatives. 

The other interesting thing about frontier AI is that different technologies are going to be well integrated. For companies like social networks that's not so important, as they are doing pure AI, but for the scientific and industrial research communities that have used HPC, sites like HLRS are going to play a very important role in combining technologies like AI and quantum computing in interesting ways. 

Are there any other big issues that you think could affect how HPC develops in the coming years?

The last piece of the puzzle to me is what's happening in the United States. At the moment we've got a government that is actively combating facts and scientific research. That has expressed itself by making significant cuts to staffing and funding for scientific agencies that have traditionally done a lot to advance HPC and AI, including the agency that uses HPC to monitor America's nuclear weapons. If this continues unabated, then progress in the United States compared with Europe, China, and Japan could slow. That could change a lot of things, including investment incentives in Europe.

I happen to believe that the country or region that wins will be the one that is best at attracting the best and brightest people from throughout the world. All of a sudden, droves of international students who would have traveled to the US for graduate level education are not coming. It would be very interesting for Europe to develop an initiative to attract them.

Interview by Christopher Williams

This interview has been edited from the original conversation for readability.