Reliable Gradient-free and Likelihood-free Prompt Tuning

Cited by: 0
Authors
Shen, Maohao [1 ]
Ghosh, Soumya [3 ]
Sattigeri, Prasanna [3 ]
Das, Subhro [3 ]
Bu, Yuheng [2 ]
Wornell, Gregory [1 ]
Affiliations
[1] MIT, Cambridge, MA 02139 USA
[2] Univ Florida, Gainesville, FL USA
[3] IBM Res, MIT IBM Watson Lab, Cambridge, MA USA
Source
17TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EACL 2023 | 2023
Keywords
MONTE-CARLO; ADAPTATION; INFERENCE;
DOI
None available
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Due to privacy or commercial constraints, large pre-trained language models (PLMs) are often offered as black-box APIs. Fine-tuning such models to downstream tasks is challenging because one can neither access the model's internal representations nor propagate gradients through it. This paper addresses these challenges by developing techniques for adapting PLMs with only API access. Building on recent work on soft prompt tuning, we develop methods to tune the soft prompts without requiring gradient computation. Further, we develop extensions that, in addition to not requiring gradients, also do not need access to any internal representation of the PLM beyond the input embeddings. Moreover, instead of learning a single prompt, our methods learn a distribution over prompts, allowing us to quantify predictive uncertainty. Ours is the first work to consider uncertainty in prompts when one has only API access to the PLM. Finally, through extensive experiments, we carefully vet the proposed methods and find them competitive with (and sometimes even improving on) gradient-based approaches with full access to the PLM.
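The paper's own algorithms are not reproduced in this record; as a rough illustration of the gradient-free idea the abstract describes, the following sketch tunes a toy "soft prompt" vector against a black-box loss using an evolution-strategy estimator. The quadratic `blackbox_loss` and all names here are stand-ins for illustration (in the real setting the loss would come from a PLM API call with the soft prompt prepended to the input embeddings), not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

def blackbox_loss(prompt):
    """Stand-in for a black-box PLM API call: returns only a scalar loss,
    with no gradients. Here a toy quadratic keeps the sketch runnable."""
    return float(np.sum((prompt - 0.5) ** 2))

def es_prompt_tuning(dim=16, iters=200, pop=20, sigma=0.1, lr=0.05):
    """Gradient-free soft-prompt tuning with an evolution-strategy
    estimator: perturb the prompt, query only loss values, and descend
    the loss-weighted average of the perturbations (no backprop)."""
    prompt = rng.normal(scale=0.1, size=dim)
    for _ in range(iters):
        eps = rng.normal(size=(pop, dim))            # random search directions
        losses = np.array([blackbox_loss(prompt + sigma * e) for e in eps])
        centered = losses - losses.mean()            # baseline for variance reduction
        grad_est = (centered[:, None] * eps).mean(axis=0) / sigma
        prompt -= lr * grad_est                      # descend the estimate
    return prompt

tuned = es_prompt_tuning()
```

A distributional variant, as in the abstract, would maintain a population or posterior over `prompt` rather than a single point, so that prediction-time sampling yields uncertainty estimates.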
Pages: 2416-2429
Page count: 14