Höchstleistungsrechenzentrum Stuttgart

The problem of hallucinations in chatbots

Philosopher Ricardo Peraça Cavassane (University of Campinas) will join us online on May 21st at 15:00 to discuss his paper on hallucinations in LLMs. The paper is available on PhilArchive:

The problem of hallucinations in chatbots based on Large Language Models: an analysis from the perspective of the semantic theory of truth and the theory of quasi-truth.

Reading the paper in advance is suggested (but by no means required!) so that we can have a fruitful discussion.

You can join the talk and discussion via the following Webex link:
https://unistuttgart.webex.com/unistuttgart/j.php?MTID=m1238e2f68c01891619f78708a7d2fb00

Abstract

The so-called “hallucinations” of chatbots based on Large Language Models, like ChatGPT, are often defined as false or nonsensical outputs. We employ formal theories of truth, specifically Tarski’s semantic theory of truth and da Costa’s theory of quasi-truth, to provide clear criteria for determining if an output by a chatbot based on an LLM can be considered true or false, quasi-true or quasi-false, or neither. By doing so, we offer a clearer characterization of the problem of hallucinations in chatbots based on LLMs, more specifically regarding the Natural Language Generation task of Generative Question Answering. We conclude that hallucinations are inherent to the current LLM architectures and that a definitive solution to this problem would require the development of significantly more advanced models, capable of establishing not only probabilistic, but also logical relations between tokens and their grounded semantic counterparts.

Event start

21 May 2026
15:00

Event end

21 May 2026
