How Can We Know What Language Models Know?

Cited by: 653
Authors
Jiang, Zhengbao [1 ]
Xu, Frank F. [1 ]
Araki, Jun [2 ]
Neubig, Graham [1 ]
Affiliations
[1] Carnegie Mellon University, Language Technologies Institute, Pittsburgh, PA 15213, USA
[2] Bosch Research North America, Palo Alto, CA, USA
Funding
U.S. National Science Foundation
DOI
10.1162/tacl_a_00324
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Recent work has presented intriguing results examining the knowledge contained in language models (LMs) by having the LM fill in the blanks of prompts such as "Obama is a ___ by profession". These prompts are usually manually created, and quite possibly sub-optimal; another prompt such as "Obama worked as a ___" may result in more accurately predicting the correct profession. Because of this, given an inappropriate prompt, we might fail to retrieve facts that the LM does know, and thus any given prompt only provides a lower bound estimate of the knowledge contained in an LM. In this paper, we attempt to more accurately estimate the knowledge contained in LMs by automatically discovering better prompts to use in this querying process. Specifically, we propose mining-based and paraphrasing-based methods to automatically generate high-quality and diverse prompts, as well as ensemble methods to combine answers from different prompts. Extensive experiments on the LAMA benchmark for extracting relational knowledge from LMs demonstrate that our methods can improve accuracy from 31.1% to 39.6%, providing a tighter lower bound on what LMs know. We have released the code and the resulting LM Prompt And Query Archive (LPAQA) at https://github.com/jzbjyb/LPAQA.
Pages: 423-438
Page count: 16
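
The querying-and-ensembling procedure summarized in the abstract can be sketched in a few lines of Python. The snippet below is an illustrative approximation, not the authors' released LPAQA code: the model (bert-base-uncased), the three hand-written prompt paraphrases, and the uniform ensemble weights are assumptions chosen for demonstration, whereas the paper generates its prompts automatically by mining and paraphrasing and also explores learned ensemble weights.

# Minimal sketch of multi-prompt querying and answer ensembling for one fact.
# Assumptions (not from the paper's release): the model choice, the three
# prompts, and uniform ensemble weights.
from collections import defaultdict

from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Paraphrased prompts for the same (subject, relation) query.
prompts = [
    "Obama is a [MASK] by profession.",
    "Obama worked as a [MASK].",
    "Obama's profession is [MASK].",
]

scores = defaultdict(float)
for prompt in prompts:
    for cand in fill_mask(prompt, top_k=10):
        # Average the LM's probability for each candidate answer
        # across prompts (uniform-weight ensemble).
        scores[cand["token_str"]] += cand["score"] / len(prompts)

best = max(scores, key=scores.get)
print(f"ensembled answer: {best!r} (score {scores[best]:.3f})")

A single poorly phrased prompt can rank the correct profession low even when the LM "knows" it, while another paraphrase or the ensemble surfaces it; this is exactly the sense in which any one prompt yields only a lower bound on the LM's knowledge.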