Benchmarking Uncertainty Quantification Methods for Large Language Models with LM-Polygraph

Cited by: 0
Authors
Vashurin, Roman [1 ]
Fadeeva, Ekaterina [2 ]
Vazhentsev, Artem [3 ]
Rvanova, Lyudmila [6 ,7 ]
Vasilev, Daniil [4 ]
Tsvigun, Akim [5 ]
Petrakov, Sergey [3 ]
Xing, Rui [1 ,8 ]
Sadallah, Abdelrahman
Grishchenkov, Kirill
Panchenko, Alexander [3 ]
Baldwin, Timothy [1 ,8 ]
Nakov, Preslav [1 ]
Panov, Maxim [1 ]
Shelmanov, Artem [1 ]
Affiliations
[1] MBZUAI, Abu Dhabi, U Arab Emirates
[2] Swiss Fed Inst Technol, Zurich, Switzerland
[3] Ctr Artificial Intelligence Technol, Moscow, Russia
[4] HSE Univ, Moscow, Russia
[5] Nebius, Schiphol, Netherlands
[6] Lab Anal & Controllable Text Generat Technol RAS, Moscow, Russia
[7] Weakly Supervised NLP Grp, Moscow, Russia
[8] Univ Melbourne, Parkville, VIC, Australia
DOI
10.1162/tacl_a_00737
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The rapid proliferation of large language models (LLMs) has stimulated researchers to seek effective and efficient approaches to deal with LLM hallucinations and low-quality outputs. Uncertainty quantification (UQ) is a key element of machine learning applications in dealing with such challenges. However, research to date on UQ for LLMs has been fragmented in terms of techniques and evaluation methodologies. In this work, we address this issue by introducing a novel benchmark that implements a collection of state-of-the-art UQ baselines and offers an environment for controllable and consistent evaluation of novel UQ techniques over various text generation tasks. Our benchmark also supports the assessment of confidence normalization methods in terms of their ability to provide interpretable scores. Using our benchmark, we conduct a large-scale empirical investigation of UQ and normalization techniques across eleven tasks, identifying the most effective approaches.
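To make the notion of an uncertainty score concrete, below is a minimal sketch of one of the simplest information-based UQ baselines for autoregressive LLMs: mean negative token log-likelihood. The function name and the toy inputs are illustrative, not part of the LM-Polygraph API; in practice the per-token log-probabilities would come from the model's generation step.

```python
import math

def mean_negative_log_likelihood(token_logprobs):
    """Sequence-level uncertainty as the average negative token log-probability.

    Higher values indicate higher model uncertainty about its own output.
    """
    if not token_logprobs:
        raise ValueError("expected at least one token log-probability")
    return -sum(token_logprobs) / len(token_logprobs)

# A confident generation (high token probabilities) scores lower
# uncertainty than a hesitant one.
confident = [math.log(0.9), math.log(0.8), math.log(0.95)]
hesitant = [math.log(0.3), math.log(0.2), math.log(0.4)]
assert mean_negative_log_likelihood(confident) < mean_negative_log_likelihood(hesitant)
```

Raw scores like this are not directly comparable across models or tasks, which is one motivation for the confidence normalization methods the benchmark also evaluates.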
Pages: 220-248
Page count: 29