A Mechanism for Sample-Efficient In-Context Learning for Sparse Retrieval Tasks

Cited by: 0
Authors
Abernethy, Jacob [1 ,2 ]
Agarwal, Alekh [1 ]
Marinov, Teodor V. [1 ]
Warmuth, Manfred K. [1 ]
Affiliations
[1] Google Res, Mountain View, CA 94043 USA
[2] Georgia Inst Technol, Atlanta, GA 30332 USA
Source
International Conference on Algorithmic Learning Theory (ALT), Vol. 237, 2024
Keywords
in-context learning; transformers
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
We study the phenomenon of in-context learning (ICL) exhibited by large language models, where they can adapt to a new learning task, given a handful of labeled examples, without any explicit parameter optimization. Our goal is to explain how a pre-trained transformer model is able to perform ICL under reasonable assumptions on the pre-training process and the downstream tasks. We posit a mechanism whereby a transformer can achieve the following: (a) receive an i.i.d. sequence of examples which have been converted into a prompt using potentially-ambiguous delimiters, (b) correctly segment the prompt into examples and labels, (c) infer from the data a sparse linear regressor hypothesis, and finally (d) apply this hypothesis on the given test example and return a predicted label. We establish that this entire procedure is implementable using the transformer mechanism, and we give sample complexity guarantees for this learning framework. Our empirical findings validate the challenge of segmentation, and we show a correspondence between our posited mechanisms and observed attention maps for step (c).
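The four-step procedure described in the abstract can be sketched in plain NumPy. This is an illustrative stand-in, not the paper's transformer construction: the `segment` and `fit_sparse` helpers, the `features:label` prompt format, and the choice of orthogonal matching pursuit for the sparse-recovery step (c) are all assumptions made for the sake of a runnable toy example.

```python
import numpy as np

def segment(prompt, delim=";"):
    """Step (b): split a delimiter-separated prompt into (x, y) pairs.
    The 'features:label' text format here is purely illustrative."""
    pairs = []
    for chunk in prompt.strip(delim).split(delim):
        feats, label = chunk.split(":")
        x = np.array([float(v) for v in feats.split(",")])
        pairs.append((x, float(label)))
    return pairs

def fit_sparse(X, y, k):
    """Step (c): recover a k-sparse linear hypothesis via orthogonal
    matching pursuit -- a classical stand-in for whatever sparse-recovery
    computation the pre-trained transformer realizes internally."""
    n, d = X.shape
    support, w = [], np.zeros(d)
    residual = y.astype(float).copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(X.T @ residual)))  # most correlated coordinate
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        w = np.zeros(d)
        w[support] = coef
        residual = y - X @ w
    return w

# Steps (a)-(d) end to end on a toy prompt with an exactly 2-sparse target.
d = 6
w_true = np.zeros(d)
w_true[1], w_true[4] = 2.0, -3.0
X = np.eye(d)                                  # one example per coordinate
prompt = ";".join(                             # step (a): examples -> prompt
    ",".join(f"{v:.4f}" for v in x) + f":{x @ w_true:.4f}" for x in X
)
pairs = segment(prompt)                        # step (b): segment the prompt
Xs = np.stack([x for x, _ in pairs])
ys = np.array([y for _, y in pairs])
w_hat = fit_sparse(Xs, ys, k=2)                # step (c): sparse hypothesis
x_test = np.full(d, 0.5)
pred = float(x_test @ w_hat)                   # step (d): predicted label
```

On this toy data the recovered hypothesis matches the 2-sparse target, so the prediction on the test point equals `x_test @ w_true`; the paper's contribution is showing that an analogous computation is implementable within the transformer mechanism itself, with sample complexity guarantees.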
Pages: 44
References (27 total)
[1] Ahn K., 2023, arXiv:2306.00297.
[2] Akyürek E., 2023, arXiv:2211.15661.
[3] Andrychowicz M., 2016, Advances in Neural Information Processing Systems, Vol. 29.
[4] Anonymous, 1996, U.S. Geological Survey Open-File Report OF-96-254.
[5] Anonymous, 2019, International Conference on Machine Learning.
[6] Bai Y., 2023, arXiv:2306.04637.
[7] Brown T. B., 2020, Advances in Neural Information Processing Systems, Vol. 33.
[8] Chan S. C. Y., 2022, Advances in Neural Information Processing Systems.
[9] Dai D. M., 2022, arXiv:2212.10559.
[10] Edelman B. L., 2022, Proceedings of Machine Learning Research.