• Journal of the China Computer Federation
  • China Science and Technology Core Journal
  • Chinese Core Journal

Computer Engineering & Science ›› 2026, Vol. 48 ›› Issue (2): 245-255.

• Artificial Intelligence and Data Mining •

An LLM hallucination mitigation method based on causal relationships

LI He,CHI Haoang,LIU Mingyu,YANG Wenjing   

  1. (College of Computer Science and Technology, National University of Defense Technology, Changsha 410073, China)
  • Received:2024-08-24 Revised:2024-10-10 Online:2026-02-25 Published:2026-03-10

Abstract: The emergence of large language models (LLMs) marks a milestone in generative artificial intelligence, with remarkable success in text comprehension and generation tasks. Despite this success across numerous downstream tasks, LLMs also suffer from severe hallucination, which poses significant challenges to their practical application. Although the self-attention mechanism is a crucial module of Transformer-based LLMs, the existing literature rarely examines LLM hallucination from the perspective of self-attention. To fill this gap, this study investigates the issue from a causal standpoint. Specifically, a method is proposed to disable individual self-attention layers without altering the structure of the LLM. Experiments disable different self-attention layers in multiple open-source LLMs, evaluate the intervened models on hallucination assessment benchmarks, and compare their hallucination levels with those of the original models. The results indicate that disabling certain self-attention layers near the front or the tail of an LLM can alleviate the hallucination problem.
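The abstract's core intervention, disabling a self-attention layer while leaving the model's structure intact, can be illustrated with a minimal sketch. This is not the paper's code; the class name, dimensions, and the suppress-via-residual trick below are illustrative assumptions. In a residual Transformer block, suppressing the attention sublayer's contribution lets the input pass through the residual connection unchanged, so the layer is effectively disabled without removing it.

```python
# Illustrative sketch (NOT the paper's implementation): a toy single-head
# self-attention block with a residual connection. Setting `disabled = True`
# makes the attention sublayer contribute nothing, so x passes through
# unchanged -- the model's structure is untouched.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class ToyAttnBlock:
    """One simplified block: x + Attn(x) (layer norm and FFN omitted)."""
    def __init__(self, d, rng):
        self.Wq = rng.standard_normal((d, d)) / np.sqrt(d)
        self.Wk = rng.standard_normal((d, d)) / np.sqrt(d)
        self.Wv = rng.standard_normal((d, d)) / np.sqrt(d)
        self.disabled = False  # intervention switch: True = layer inert

    def __call__(self, x):
        if self.disabled:
            return x  # residual path only: the sublayer adds nothing
        q, k, v = x @ self.Wq, x @ self.Wk, x @ self.Wv
        attn = softmax(q @ k.T / np.sqrt(x.shape[-1]))
        return x + attn @ v

rng = np.random.default_rng(0)
blocks = [ToyAttnBlock(8, rng) for _ in range(4)]
x = rng.standard_normal((5, 8))  # 5 tokens, hidden size 8

def forward(x):
    for b in blocks:
        x = b(x)
    return x

y_full = forward(x)
blocks[0].disabled = True      # intervene on the first self-attention layer
y_intervened = forward(x)
# The intervened model produces different outputs, which one can then
# score on a hallucination benchmark against the original model.
print(np.allclose(y_full, y_intervened))  # False: the intervention has an effect
```

In a real open-source LLM the same effect is typically achieved with a forward hook or a wrapper around the attention sublayer of the chosen decoder block, rather than by editing model weights.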

Key words: large language models (LLMs), hallucinations of large language models, causal representation learning