Hallucination Detection: Robustly Discerning Reliable Answers in Large Language Models

Cited by: 37
Authors
Chen, Yuyan [1 ]
Fu, Qiang [2 ]
Yuan, Yichen [3 ]
Wen, Zhihao [4 ]
Fan, Ge [5 ]
Liu, Dayiheng [6 ]
Zhang, Dongmei [2 ]
Li, Zhixu [1 ]
Xiao, Yanghua [7 ]
Affiliations
[1] Fudan Univ, Sch Comp Sci, Shanghai Key Lab Data Sci, Shanghai, Peoples R China
[2] Microsoft, Beijing, Peoples R China
[3] Shanghai Key Lab Data Sci, Shanghai, Peoples R China
[4] Singapore Management Univ, Singapore, Singapore
[5] Tencent, Shenzhen, Peoples R China
[6] DAMO Acad, Hangzhou, Peoples R China
[7] Fudan Univ, Sch Comp Sci, Shanghai Key Lab Data Sci, Fudan Aishu Cognit Intelligence Joint Res, Shanghai, Peoples R China
Source
PROCEEDINGS OF THE 32ND ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2023 | 2023
Funding
National Natural Science Foundation of China;
Keywords
Hallucination Detection; Large Language Models; Reliable Answers;
DOI
10.1145/3583780.3614905
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Large Language Models (LLMs) have gained widespread adoption in various natural language processing tasks, including question answering and dialogue systems. However, a major drawback of LLMs is hallucination: they generate unfaithful or inconsistent content that deviates from the input source, which can lead to severe consequences. In this paper, we propose a robust discriminator named RelD to effectively detect hallucination in LLMs' generated answers. RelD is trained on the constructed RelQA, a bilingual question-answering dialogue dataset with answers generated by LLMs and a comprehensive set of metrics. Our experimental results demonstrate that RelD successfully detects hallucination in the answers generated by diverse LLMs. Moreover, it performs well in distinguishing hallucination in LLMs' generated answers on both in-distribution and out-of-distribution datasets. Additionally, we conduct a thorough analysis of the types of hallucinations that occur and present valuable insights. This research contributes significantly to the detection of reliable answers generated by LLMs and holds noteworthy implications for mitigating hallucination in future work.
Pages: 245-255
Page count: 11