Large Language Models are Zero-Shot Reasoners

Times Cited: 0
Authors
Kojima, Takeshi [1 ]
Gu, Shixiang Shane [2 ]
Reid, Machel [3 ]
Matsuo, Yutaka [1 ]
Iwasawa, Yusuke [1 ]
Affiliations
[1] Univ Tokyo, Tokyo 1138654, Japan
[2] Google Res, Brain Team, Mountain View, CA USA
[3] Google Res, London, England
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35, NEURIPS 2022 | 2022
Keywords
(none)
DOI
not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Pretrained large language models (LLMs) are widely used across many sub-fields of natural language processing (NLP) and are generally known as excellent few-shot learners given task-specific exemplars. Notably, chain-of-thought (CoT) prompting, a recent technique for eliciting complex multi-step reasoning through step-by-step answer examples, has achieved state-of-the-art performance in arithmetic and symbolic reasoning, difficult system-2 tasks that do not follow the standard scaling laws for LLMs. While these successes are often attributed to LLMs' ability for few-shot learning, we show that LLMs are decent zero-shot reasoners simply by adding "Let's think step by step" before each answer. Experimental results demonstrate that our Zero-shot-CoT, using the same single prompt template, significantly outperforms zero-shot LLM performance on diverse benchmark reasoning tasks, including arithmetic (MultiArith, GSM8K, AQUA-RAT, SVAMP), symbolic reasoning (Last Letter, Coin Flip), and other logical reasoning tasks (Date Understanding, Tracking Shuffled Objects), without any hand-crafted few-shot examples, e.g., increasing accuracy on MultiArith from 17.7% to 78.7% and on GSM8K from 10.4% to 40.7% with the large-scale InstructGPT model (text-davinci-002), with improvements of similar magnitude from another off-the-shelf large model, the 540B-parameter PaLM. The versatility of this single prompt across very diverse reasoning tasks hints at untapped and understudied fundamental zero-shot capabilities of LLMs, suggesting that high-level, multi-task, broad cognitive capabilities may be extracted by simple prompting. We hope our work serves not only as the minimal strongest zero-shot baseline for challenging reasoning benchmarks, but also highlights the importance of carefully exploring and analyzing the enormous zero-shot knowledge hidden inside LLMs before crafting fine-tuning datasets or few-shot exemplars.
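The Zero-shot-CoT method described in the abstract can be sketched as a two-stage prompting pipeline: first elicit a reasoning chain by appending the trigger phrase "Let's think step by step", then feed that reasoning back with an answer-extraction prompt. This is a minimal illustration, not the paper's exact implementation; `generate` stands in for any LLM text-completion call, and the precise prompt wording is an assumption.

```python
def zero_shot_cot(question: str, generate) -> str:
    """Two-stage Zero-shot-CoT prompting (sketch).

    `generate` is a hypothetical stand-in for an LLM completion
    function: it takes a prompt string and returns a completion string.
    """
    # Stage 1: reasoning extraction -- append the zero-shot trigger phrase.
    reasoning_prompt = f"Q: {question}\nA: Let's think step by step."
    reasoning = generate(reasoning_prompt)

    # Stage 2: answer extraction -- concatenate the generated reasoning
    # and prompt the model again for the final answer only.
    answer_prompt = f"{reasoning_prompt} {reasoning}\nTherefore, the answer is"
    return generate(answer_prompt).strip()
```

Because the same fixed template is used for every task, no per-task few-shot exemplars are needed, which is what distinguishes this from standard (few-shot) CoT prompting.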
Pages: 15