TrueTeacher: Learning Factual Consistency Evaluation with Large Language Models

Times Cited: 0
Authors
Gekhman, Zorik [1 ,2 ]
Herzig, Jonathan [2 ]
Aharoni, Roee [2 ]
Elkind, Chen [2 ]
Szpektor, Idan [2 ]
Affiliations
[1] Technion Israel Inst Technol, Haifa, Israel
[2] Google Res, Mountain View, CA 94043 USA
Source
2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING, EMNLP 2023 | 2023
Keywords
DOI
Not available
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Factual consistency evaluation is often conducted using Natural Language Inference (NLI) models, yet these models exhibit limited success in evaluating summaries. Previous work improved such models with synthetic training data. However, this data is typically based on perturbed human-written summaries, which often differ in their characteristics from real model-generated summaries and have limited coverage of possible factual errors. Alternatively, large language models (LLMs) have recently shown promising results in directly evaluating generative tasks, but are too computationally expensive for practical use. Motivated by these limitations, we introduce TrueTeacher, a method for generating synthetic data by annotating diverse model-generated summaries using an LLM. Unlike prior work, TrueTeacher does not rely on human-written summaries and is multilingual by nature. Experiments on the TRUE benchmark show that a student model trained on our data substantially outperforms both the state-of-the-art model of similar capacity and the LLM teacher. In a systematic study, we compare TrueTeacher to existing synthetic data generation methods and demonstrate its superiority and robustness to domain shift. We also show that our method generalizes to multilingual scenarios. Lastly, we release our large-scale synthetic dataset (1.4M examples), generated using TrueTeacher, and a checkpoint trained on this data.(1)
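The data-generation recipe the abstract describes can be sketched as a simple loop: several summarization models produce candidate summaries for each document, and an LLM "teacher" labels every (document, summary) pair as factually consistent or not, yielding synthetic training data for a small student classifier. The sketch below is an illustration only — `teacher_label`, the stub summarizers, and the toy labeling heuristic are assumptions, not the actual prompts or models used in the paper.

```python
# Hedged sketch of an LLM-as-teacher synthetic-data loop in the spirit of
# TrueTeacher. All components here are illustrative stand-ins.
from dataclasses import dataclass
from typing import Callable, Iterable, List


@dataclass
class Example:
    document: str
    summary: str
    label: int  # 1 = factually consistent, 0 = inconsistent


def generate_synthetic_data(
    documents: Iterable[str],
    summarizers: Iterable[Callable[[str], str]],
    teacher_label: Callable[[str, str], int],
) -> List[Example]:
    """Annotate diverse model-generated summaries with a teacher model."""
    summarizers = list(summarizers)
    data = []
    for doc in documents:
        for summarize in summarizers:  # diversity: several summarization models
            summary = summarize(doc)
            # The teacher (an LLM in the paper) judges factual consistency.
            data.append(Example(doc, summary, teacher_label(doc, summary)))
    return data


# Toy usage with stub components (illustration only):
docs = ["The cat sat on the mat."]
summarizers = [
    lambda d: d.split(".")[0] + ".",      # faithful "summary"
    lambda d: "The dog sat on the mat.",  # hallucinated entity
]
# Stub teacher: summary is "consistent" if all its tokens appear in the source.
stub_teacher = lambda doc, summ: int(all(w in doc for w in summ.split()))
dataset = generate_synthetic_data(docs, summarizers, stub_teacher)
```

The resulting labeled pairs would then serve as supervision for a compact student model, which — per the abstract — can outperform the much larger teacher on the TRUE benchmark.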
Pages: 2053-2070
Page count: 18