International Workshop Looks at Trends in Industrial HPC Usage

10 October 2019

Supercomputing centers are taking a wide range of measures to increase access to supercomputing resources for commercial R&D.

Technologies utilizing high-performance computing (HPC) — simulation, data analytics, visualization, and artificial intelligence, for example — are providing valuable knowledge for industry in designing better products and making data-driven decisions. At the same time, however, companies both large and small can require unique kinds of support when accessing and using supercomputing technologies.

These were some key messages at the International Industrial Supercomputing Workshop 2019, hosted by the High-Performance Computing Center Stuttgart (HLRS) on September 25–26, 2019. The meeting — the seventh gathering in this series — attracted senior managers for industrial partnerships from leading supercomputing centers around the world as well as HPC users from industry.

During the meeting, speakers presented examples of how HPC-enabled technologies have had an impact on industrial research at their respective centers, and described programs they have implemented to increase industrial access and productivity on their computing systems. Such examples also illustrated effective strategies for promoting HPC usage in the private sector, and brought into focus some emerging HPC needs in industry.

In addition to HLRS and SICOS BW, participants came from the Barcelona Supercomputing Center (Spain), Edinburgh Parallel Computing Center (UK), Bosch (Germany), CINECA (Italy), KISTI (South Korea), Toyo University (Japan), Oak Ridge National Laboratory (USA), National Center for Supercomputing Applications (USA), Leibniz Supercomputing Centre (Germany), and PDC Center for High Performance Computing (Sweden) to exchange experiences and insights.

Need for HPC in industry is growing

Several presenters observed that as supercomputers approach exascale, industry is increasingly looking for solutions that involve high-performance computing. In particular, the movement toward what has been called Industry 4.0 — which increasingly involves data-driven design of products and services — means that, in addition to the ongoing need for classical applications of HPC for simulation, there is a rapidly growing need for data analytics in industry.

Claudio Arlandini, project manager for HPC in industry at CINECA, an Italian consortium that provides a broad spectrum of IT services, pointed out that growing numbers of companies have been approaching the organization for support in developing and implementing solutions for big data and artificial intelligence. This trend, he suggested, points toward a future in which HPC, big data, and AI will become closely interrelated. “That doesn't mean that we are converging to a computing system that caters to all of them,” Arlandini predicted, “but moving toward workflows that combine the best of these worlds.”

Brendan McGinty from the National Center for Supercomputing Applications (NCSA) in the United States made a similar observation: "We should be using the term confluence. AI and HPC will maintain their identities, though will merge to the point that it becomes difficult to see what is HPC and what is AI." McGinty explained that such integrated approaches will become increasingly important for companies in the finance, insurance, agriculture, and pharmaceutical industries, for example, as they have begun collecting large amounts of data, but are only beginning to understand what to do with it.

Suzy Tichenor, who manages industrial partnerships at Oak Ridge National Laboratory, explained that companies have many reasons for partnering with nonprofit HPC centers. For one, working on large-scale supercomputers enables them to pursue breakthrough research that would be impossible using their own computing systems. Having access to resources at HPC centers also enables them to test new methods, gain insights about larger and more advanced computing systems that will only become available to industry in the future, and try out pilot projects that will enable them to justify the economic benefits of expanding their own internal computing platforms.

Tichenor also gave an overview of industry participation in the Exascale Computing Project, a large, multiyear project in the United States to develop the country's first exascale computer. She explained that the collaboration is showing benefits for both industry and the government agencies developing the new system: industry's needs and concerns are being incorporated into the planning of the new system, while the government and academic community are benefiting from industrial expertise in managing large-scale projects.

Supporting companies is the key

When companies like Amazon first began offering cloud computing services several years ago, some feared that these services would reduce demand for academic and national supercomputing centers. As participants in the meeting reported, however, many in industry recognize the advantages of working with nonprofit centers.

"Companies want to work with EPCC as opposed to Amazon Cloud because they can pick up the phone and call someone," remarked Mark Parsons, director of the Edinburgh Parallel Computing Center. Indeed, several of the participants commented that personal contact is important for building trust and developing relationships that over time enable companies to integrate HPC into their R&D pipelines.

In addition to providing basic helpdesk support, many of the HPC centers represented at the meeting have also developed internal expertise in scientific and engineering disciplines that are important to their user communities. For several of the centers, this means creating discipline-specific competence centers staffed with scientists who clearly understand specific HPC application areas, as well as the computer science methods that can best support them.

Andreas Wierse of SICOS BW — a nonprofit organization based at HLRS that facilitates access to HPC for small and medium-sized enterprises — identified several key steps that HPC centers can take to improve the successful use of HPC in industry. These include offering flexible access to the computers and making it easy for companies to try out large computing systems without taking on significant risk. Also important is making the systems easy to use and providing excellent technical support. As HLRS director Michael Resch emphasized, offering training that gives staff at companies the specialized skills they need to take advantage of HPC is also essential. Such efforts are the keys to building capabilities within small companies.

Tracking and identifying success stories is also essential when working with industrial users, said HLRS's Bastian Koller, leader of the Fortissimo project, which provides a "one stop shop" for small companies interested in incorporating HPC in their work. NCSA's Brendan McGinty also pointed out the importance of defining the return on investment that companies achieve when starting out with HPC. Such cases clearly demonstrate the advantages of incorporating simulation, artificial intelligence, visualization, or high-performance data analytics into the design of new and better products and methods.

Furthermore, the personal contacts that academic HPC centers enable can promote the development of communities that can benefit the larger engineering community. At HLRS, for example, solution centers focused on the automotive and media industries bring together stakeholders in a precompetitive framework where they can focus on the development of computing tools that will benefit all. As the Leibniz Supercomputing Centre's Laura Schulz also explained, HPC centers can promote the integration between universities and industry in ways that contribute to innovation on a regional basis.

In addition to formal presentations, the gathering also offered generous time for discussion and networking, promoting a lively international exchange of insights and experiences among the attendees.

Christopher Williams