A Survey on Evaluation of Large Language Models

Cited by: 631
Authors
Chang, Yupeng [1]
Wang, Xu [1]
Wang, Jindong [2]
Wu, Yuan [1]
Yang, Linyi [3]
Zhu, Kaijie [4]
Chen, Hao [5]
Yi, Xiaoyuan [2]
Wang, Cunxiang [3]
Wang, Yidong [6]
Ye, Wei [6]
Zhang, Yue [3]
Chang, Yi [1]
Yu, Philip S. [7]
Yang, Qiang [8]
Xie, Xing [2]
Affiliations
[1] Jilin University, School of Artificial Intelligence, 2699 Qianjin Street, Changchun 130012, China
[2] Microsoft Research Asia, Beijing, China
[3] Westlake University, Hangzhou, China
[4] Institute of Automation, Chinese Academy of Sciences, Beijing, China
[5] Carnegie Mellon University, Pittsburgh, PA 15213, USA
[6] Peking University, Beijing, China
[7] University of Illinois Chicago, Chicago, IL 60680, USA
[8] Hong Kong University of Science and Technology, Kowloon, Hong Kong, China
Keywords
Large language models; evaluation; model assessment; benchmark; k-fold; performance; validity
DOI
10.1145/3641289
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance across a wide range of applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level but also at the societal level, to better understand their potential risks. In recent years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. First, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, education, the natural and social sciences, agent applications, and other areas. Second, we answer the 'where' and 'how' questions by examining the evaluation methods and benchmarks that serve as crucial components in assessing the performance of LLMs. We then summarize the success and failure cases of LLMs on different tasks. Finally, we shed light on several future challenges in LLM evaluation. Our aim is to offer valuable insights to researchers in the field of LLM evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better support the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey
Pages: 45