Conference proceedings article

Fact or Fiction? Exploring Explanations to Identify Factual Confabulations in RAG-Based LLM Systems



Publication Details
Authors:
Reinhard, P.; Li, M.; Fina, M.; Leimeister, J.
Editor:
TBD
Place:
Yokohama, Japan

Publishing status:
Accepted for publication
Book title:
Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI EA ’25)
DOI link of the first publication:
Languages:
English


Abstract

The adoption of generative artificial intelligence (GenAI) and large language models (LLMs) in society and business is growing rapidly. While these systems often generate convincing and coherent responses, they risk producing incorrect or non-factual information, known as confabulations or hallucinations. Consequently, users must critically assess the reliability of these outputs when interacting with LLM-based agents. Although advancements such as retrieval-augmented generation (RAG) have improved the technical performance of these systems, there is a lack of empirical models that explain how humans detect confabulations. Building on the explainable AI (XAI) literature, we examine the role of reasoning-based explanations in helping users identify confabulations in LLM systems. An online experiment (n = 97) reveals that analogical and factual explanations improve detection accuracy but require more time and cognitive effort than the no-explanation baseline.



Keywords
Confabulations, Explainable AI, Generative AI, GenXAI, Hallucinations, LLM, RAG, XAI

