Trust in Information

Multidisciplinary research led by the HLRS Department of Philosophy of Computational Sciences is developing perspectives for assessing the trustworthiness of computational science and limiting the spread of misinformation.

The project is divided into six subprojects with independent but connected research questions:

1) Trust in computer-intensive methods

We will undertake case studies in engineering and computational fluid dynamics (CFD) simulations to understand the relationship between the epistemic opacity of computer-based methods and different modes of trust (trust in experts, in methods, and in the results of methods).

2) Using algorithms to trust

This subproject will design algorithms to enhance trust in different application areas, such as the interface between science and policy. The resulting tools will facilitate informed decision-making in complex environments.

3) Deepfake detectors' vulnerability to adversarial examples: a case for mistrust in AI?

We analyze how the problem of adversarial examples is addressed in AI research. Such misclassifications create specific security concerns for real-world systems. We first ask what characterizes current adversarial defense strategies. Second, we ask how realistic the threat models underlying these defenses are, and to what extent addressing them can reduce mistrust in real-world AI systems.
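To make concrete what is at stake, the following is a minimal sketch of the fast gradient sign method (FGSM), one standard way of constructing adversarial examples; the tiny linear classifier and random input are hypothetical placeholders for illustration only, not artifacts of this project:

    # Minimal FGSM sketch (JAX). The linear classifier and data below are
    # hypothetical placeholders used only to illustrate the technique.
    import jax
    import jax.numpy as jnp

    def loss(params, x, y):
        # Logistic loss of a linear classifier on input x with label y.
        w, b = params
        logit = jnp.dot(w, x) + b
        return -(y * jax.nn.log_sigmoid(logit)
                  + (1.0 - y) * jax.nn.log_sigmoid(-logit))

    def fgsm(params, x, y, eps):
        # Perturb x in the direction that most increases the loss,
        # within an L-infinity budget of eps.
        grad_x = jax.grad(loss, argnums=1)(params, x, y)
        return x + eps * jnp.sign(grad_x)

    w = jax.random.normal(jax.random.PRNGKey(0), (8,))
    x = jax.random.normal(jax.random.PRNGKey(1), (8,))
    params, y = (w, 0.0), 1.0

    x_adv = fgsm(params, x, y, eps=0.1)
    # The loss on the perturbed input is higher, even though x_adv
    # remains numerically close to x.
    print(loss(params, x, y), loss(params, x_adv, y))

Defense strategies of the kind examined in this subproject (for example, adversarial training) are evaluated against threat models that specify what such an attacker is allowed to do.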

4) Trust in medical decision-making systems

Trust is a crucial element in medicine and healthcare. As computational and informational technologies are adopted by physicians for diagnosis and treatment planning, and by governments and health authorities for policy making in public health, stakeholders will benefit from knowing which clinical decision support systems they can trust. This subproject specifies the characteristics of trustworthy medical systems.

5) Trust in computer-aided testimony in criminological contexts

This subproject reflects on how investigations based on both human and computer-aided testimony gain trust in their results. We analyze the use of virtual crime scene reconstructions to understand what reasons justify these forms of trust. In whom or what do we trust when we trust the results of investigations based on witness reports or virtual models?

6) Trust in computer-aided design in urban planning

Simulations are a promising tool for facilitating participatory urban planning processes. However, this requires that the stakeholders involved have sufficient trust in the simulations. In this subproject, we first address the question of what trust in a simulation means in this context. Second, we ask how various problematic forms of mistrust and doubt can be overcome. What factors play a role in whether people assess techniques as reliable or unreliable, and the people involved as trustworthy or untrustworthy?

Runtime

1 August 2020 – 30 June 2024

Categories

Philosophy & Ethics

Funding

Ministry of Science, Research and Arts Baden-Württemberg (MWK)

Project achievements

Outcomes of the project Trust in Information include the following:

  • Development of a theory of trust for computer-intensive systems, including application of the framework within the subprojects.
  • Three conferences in the series “The Science and Art of Simulation” (SAS): Trust in Science (2021), Trust and Disinformation (2022), Reliability or Trustworthiness? (2023)
  • Two summer schools: Trust in Science (2022), Trust and Machine Learning (2023)
  • Two conference proceedings: Trust in Science, Trust and Disinformation (both forthcoming from Springer)

Future objectives

Follow-up projects will investigate the themes “Reproducibility and Simulation Avoidance” and “Modelling for Policy.”

Contact

Nico Formanek

Head, Department of Philosophy of Computational Sciences

+49 711 685-87289
nico.formanek(at)hlrs.de