Conference Proceedings Contribution

Fact or Fiction? Exploring Explanations to Identify Factual Confabulations in RAG-Based LLM Systems



Publication Details
Authors:
Reinhard, P.; Li, M.; Fina, M.; Leimeister, J.
Editors:
TBD
Place of Publication:
Yokohama, Japan

Publication Status:
Accepted for publication
Book Title:
Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI EA ’25)
DOI of First Publication:
Languages:
English


Abstract

The adoption of generative artificial intelligence (GenAI) and large language models (LLMs) in society and business is growing rapidly. While these systems often generate convincing and coherent responses, they risk producing incorrect or non-factual information, known as confabulations or hallucinations. Consequently, users must critically assess the reliability of these outputs when interacting with LLM-based agents. Although advancements such as retrieval-augmented generation (RAG) have improved the technical performance of these systems, there is a lack of empirical models that explain how humans detect confabulations. Building on the explainable AI (XAI) literature, we examine the role of reasoning-based explanations in helping users identify confabulations in LLM systems. An online experiment (n = 97) reveals that analogical and factual explanations improve detection accuracy but require more time and cognitive effort than a no-explanation baseline.



Keywords
Confabulations, Explainable AI, Generative AI, GenXAI, Hallucinations, LLM, RAG, XAI

