The project is divided into six subprojects with independent but connected research questions:
We will undertake case studies in engineering and computer simulation (CFD) to understand the relationship between the epistemic opacity of computer-based methods and different modes of trust (trust in experts, in methods, and in the results of methods).
The project will design algorithms to enhance trust in different applications, such as the interface between science and policy. The resulting tools will facilitate informed decision-making in complex environments.
We analyze how the problem of adversarial examples is addressed in AI research. Such misclassifications create specific security concerns for real-world systems. We first ask what characterizes current adversarial defense strategies. Second, we ask how realistic the underlying threat models are, with the aim of reducing mistrust in real-world AI systems.
Trust is a crucial element in medicine and healthcare. Computational and informational technologies are now used by physicians in diagnosis and treatment planning, and by governments and health authorities for policy making in public health. Stakeholders will therefore benefit from knowing which clinical decision support systems they can trust. This project specifies the characteristics of trustworthy medical systems.
Our subproject on criminological contexts examines how investigations based on both human and computer-aided testimony gain trust in their results. We analyze the use of virtual crime scene reconstructions to understand what reasons justify these forms of trust. In whom or what do we trust when we trust the results of investigations based on witness reports or virtual models?
Simulations are a promising tool for facilitating participatory urban planning processes. However, this requires that the stakeholders involved have sufficient trust in the simulations. In this subproject, we first address the question of what trust in a simulation means in this context. Second, we ask how various problematic forms of mistrust and doubt can be overcome. What factors determine whether people assess techniques as reliable or unreliable, and the people involved as trustworthy or untrustworthy?
1 August 2020
1 August 2023
Philosophy & Ethics
Philosophy of Computational Sciences
Baden-Württemberg Ministry of Science, Research and Arts
Head, Philosophy of Computational Sciences
High-Performance Computing Center Stuttgart
Nobelstraße 19, 70569 Stuttgart, Germany
+49 (0) 711 / 685-87 209
A member of the Gauss Centre for Supercomputing, HLRS is one of three German national centers for high-performance computing.
HLRS is a central unit of the University of Stuttgart.