New HLRS Research Program to Focus on Trust and Information

Dr. Andreas Kaminski, Head of the HLRS Department of Philosophy of Computational Sciences, will lead the project "Vertrauen in Information." Photo courtesy of Dr. Kaminski.

Multidisciplinary research led by the HLRS Department of Philosophy will develop perspectives for assessing the trustworthiness of computational science and limiting the spread of misinformation.

A few short decades ago, the arrival of personal computers, the Internet, and powerful supercomputers for scientific research promised to improve society by making it easier to produce and share information. Recent developments have called this optimism into question, however. With growing concerns, for example, about how personal data and artificial intelligence (AI) can be used and misused, or about the corrosive effects of social media disinformation campaigns on important public debates, there is increasing anxiety about how to ensure that the digital information citizens and policy makers rely on can be trusted.

This growing distrust of information comes just as the computational sciences are becoming more important than ever for responding to urgent global challenges. Computer models are necessary, for example, for predicting and responding to the effects of climate change, helping public health officials forecast the spread of viruses like COVID-19, and developing new technologies for improving environmental sustainability. In international relations, economics and finance, crisis management, and many other sectors, computer models can provide invaluable insights to support data-based decision making.

For this reason, building trust in information requires addressing a range of problems related to how information is generated, distributed, and received. How can scientists ensure, for example, that the models they develop are trustworthy and form a basis for public debate? And how can people who access digital information be in a better position to distinguish between trustworthy information and misleading propaganda?

A new three-year project recently launched at the High-Performance Computing Center Stuttgart (HLRS) aims to address such questions. With the support of a grant of approximately €550,000 from the Baden-Württemberg Ministry of Science, Research and Art, a team led by Dr. Andreas Kaminski of the HLRS Department of Philosophy of Computational Sciences will bring together philosophers, social scientists, technologists, and other experts to investigate trust in the context of information technology. The project will produce insights for improving trustworthiness in computational research, for developing AI-based approaches for judging the reliability of information, and for fighting deception in digital media.

"Currently there is a sense that scientific computing and information technology are approaching a crossroads," said HLRS Director Prof. Michael Resch. "At the same time that they have so much to offer, there is a danger that growing skepticism could limit their contributions to solving important global challenges. In this project, HLRS intends to get out ahead of this issue to help computer scientists create more trustworthy algorithms, limit the impact of nefarious uses of digital media, and give policy makers the means to better evaluate the trustworthiness of the information they consume."

Alive in the sea of information

Despite the advantages of easy access to scientific information, a number of factors can make its trustworthiness difficult to evaluate. For one thing, scientific research is complicated and evaluating its reliability requires expertise that is not available to many people, even though they rely on science to guide their decision-making. At the same time, the increasing complexity of computational algorithms — for example in machine learning applications — can mean that even the scientists involved in developing them can't always know exactly how their results were generated, leaving them in a position where they must trust what happens inside a so-called "black box."

One of these faces is real and the other was generated by artificial intelligence. Can you tell which is which? Applications such as these highlight problems of trustworthiness that can arise with digital information. Source: "Which Face Is Real?" Used with permission.

For policy makers and the general public, such limitations on insight can make it difficult to know when to trust scientific information. When combined with the fact that digital media typically mediate consumers' access to that information, the question of trust can become even more fraught.

As Kaminski explained, "Every day we are confronted with large amounts of information that are relevant for our lives in many different ways. However, as individuals we often don't have the necessary experience or expertise to evaluate that information thoroughly. This means that we must rely on others to help us understand whether the information we receive can be trusted as a basis for our own opinions and decision making." For Kaminski, a philosopher of science and technology, this creates a problem of epistemology; that is, of understanding how we can be sure that the things we think we know are actually true.

The new project at HLRS thus assumes that improving the trustworthiness of digital information is not just a technical challenge, but will need to engage with questions of how humans perceive information, as well as how trust is built between individuals and within communities. Kaminski and his team members will consider the question of trust and information from a multidisciplinary perspective, bringing together expertise in fields such as psychology, sociology, political science, economics, pedagogy, and history that have long engaged with questions related to how trust is created or broken. Through collaborative research projects, workshops, conferences, and publications, experts from these fields will work together to develop a theoretical basis for improving trustworthiness in the development of simulation and AI technologies.

Also important in the new project will be close collaboration with the HiDALGO project, which is focusing on the development of high-performance computing solutions to address global challenges, and the HLRS Sociopolitical Advisory Board, which has been helping to orient HLRS's activities toward topics where supercomputing could provide direct societal benefits.

In addition, Dr. Sebastian Hallensleben, who is currently establishing an Information Integrity Laboratory at the technology organization VDE, will be an important cooperation partner. He and Kaminski are currently collaborating on a variety of projects focusing on trust in information technology. Commenting on the significance of these efforts, Hallensleben said, "The wide availability of AI-based fabrication tools since 2018, including deepfakes and GPT2/3, makes it possible to unleash large numbers of convincing bots and to overwhelm the digital space with targeted fakes, thus thwarting trust and constructive discourse. Detection tools for fakes are only part of the solution, however; we need radical new concepts for creating privacy-preserving and authentic identities."

Case studies to focus on trustworthiness of advanced applications of simulation

The new project led by HLRS will study the technological and social origins of mistrust of information created using computing. It will also consider complex questions about the feasibility of using automated systems to identify misinformation in the media and contain its spread. In addition to conducting theoretical research, the project will conduct six case studies that look at specific applications of simulation and AI that raise questions of trustworthiness.

One case study, for example, will consider the use of simulation in climate research. Here, the question is not just how scientists could overcome public skepticism about the reality of climate change, but also how to address the risk of trusting scientific models uncritically. Research will also look at artificial intelligence tools designed to identify fake news and deepfakes (AI-generated videos that falsely depict a person doing or saying something that didn't actually happen). One important challenge here is to clearly articulate the logical, technological, and social frameworks that will be needed to create AI tools that could reliably distinguish between real and fake information.

Additional case studies will look at issues relevant for other research activities at HLRS, including how new computational tools for medicine affect relationships between doctors and their patients, limitations of visualization tools for virtual crime scene reconstructions and autopsies, and how to build trustworthiness in models of air and noise pollution.

Through publications and other outreach, HLRS will also share the insights it develops with the widest possible community.

Christopher Williams