Building Trust in the Face of Disinformation

A conference organized by HLRS explored the origins and nature of disinformation, its effects on public opinion, and potential strategies for combating it.

The tendency of individuals to believe and act on bad information is nothing new. The recent rise of social media and its effects on the polarization of public opinion, however, have made the need to understand how and why this happens more urgent than ever.

With support from the Baden-Württemberg Ministry of Science, Research and the Arts, the HLRS Department of Philosophy of Computational Sciences has been conducting research on how to assess and improve the trustworthiness of information, both in the computational sciences and in society at large. At a recent three-day international conference titled “Trust & Disinformation,” HLRS invited researchers to Stuttgart to explore how disinformation, together with other technological, sociological, institutional, and political factors, can both lead to mistaken trust in false information and damage the shared sense of trust necessary for society to function.

Origins of a crisis of trust

Appropriate trust in the best available information is essential for healthy public discourse and democratic decision making. As several conference speakers pointed out, however, an increasingly complex information landscape has resulted not only in a proliferation of false information, but also in a crisis of trust. The growing prominence of misinformation (false information shared without intent to deceive), disinformation (false information disseminated with intent to deceive), and malinformation (information that may contain grains of fact but is deliberately taken out of context or misleadingly framed) has created a widespread sense of what scientists call epistemic uncertainty, a state in which confusion and disagreement about fundamental facts make public conversation and decision making difficult.

Since the Brexit vote and the 2016 United States presidential election, concepts such as filter bubbles, confirmation bias, and echo chambers have become shorthand for the ways in which users’ interactions with social media expose them to disinformation, segregate them into like-minded groups, and restrict their access to alternative viewpoints. Several talks at the conference, however, suggested that the time has come to question the usefulness of such terms.

The tendency of individuals to affiliate with like-minded others is not inherently a problem, argued keynote speaker David Coady (University of Tasmania), but becomes one when it reinforces the spread of disinformation. Similarly, he suggested, positions dismissed as conspiracy theories or extremist ideas can over time turn out to be true or become socially accepted. For this reason, he advocated analyzing the substance of such claims rather than their structure.

Moreover, speakers explained, an individual’s tendency to believe disinformation is not driven by social media algorithms alone, but arises from a complex set of factors, including personal beliefs and cognitive processes, social relationships, education, and even pattern-seeking functions of the brain that conferred advantages in human evolution. Countering disinformation successfully will require a more refined understanding of how these and other factors interact.

Strategies against disinformation

Speakers at the conference, including keynote speaker Hendrik Heuer (Harvard University), also discussed the potential advantages and feasibility of measures for fighting the spread of disinformation. Possible approaches include media literacy campaigns, web plugins that rate trustworthiness or facilitate fact-checking, algorithms for deleting disinformation from the web, and even crowdsourced alerts of disinformation.

Although such measures sound attractive and some might even be technically feasible, speakers at the conference repeatedly returned to a critical problem that calls their desirability into question: Who should have the authority to distinguish “good” information from “bad”? This political question raises a host of issues concerning the regulation of individual rights and free will, and presents new challenges with respect to legitimacy and trust.

Discussions at the HLRS conference demonstrated that there will be no easy solutions to the problem of disinformation, although gaining a better understanding of it will be an important step in efforts to restore public trust.

Christopher Williams