ZeroSCROLLS: A Zero-Shot Benchmark for Long Text Understanding

Cited: 0
Authors
Shaham, Uri [1]
Ivgi, Maor [1]
Efrat, Avia [1]
Berant, Jonathan [1]
Levy, Omer [1,2]
Affiliations
[1] Tel Aviv Univ, Blavatnik Sch Comp Sci, Tel Aviv, Israel
[2] Meta AI, New York, NY USA
Source
FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS - EMNLP 2023 | 2023
Keywords
(none)
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
We introduce ZeroSCROLLS, a zero-shot benchmark for natural language understanding over long texts, which contains only test and small validation sets, without training data. We adapt six tasks from the SCROLLS benchmark, and add four new datasets, including two novel information fusing tasks, such as aggregating the percentage of positive reviews. Using ZeroSCROLLS, we conduct a comprehensive evaluation of both open-source and closed large language models, finding that Claude outperforms ChatGPT, and that GPT-4 achieves the highest average score. However, there is still room for improvement on multiple open challenges in ZeroSCROLLS, such as aggregation tasks, where models struggle to pass the naive baseline. As the state of the art is a moving target, we invite researchers to evaluate their ideas on the live ZeroSCROLLS leaderboard.
Pages: 7977-7989
Page count: 13