Meta-Learning Online Adaptation of Language Models

Cited by: 0
Authors
Hu, Nathan [1 ]
Mitchell, Eric [1 ]
Manning, Christopher D. [1 ]
Finn, Chelsea [1 ]
Affiliations
[1] Stanford Univ, Stanford, CA 94305 USA
Source
2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING, EMNLP 2023 | 2023
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Large language models encode impressively broad world knowledge in their parameters. However, the knowledge in static language models falls out of date, limiting the model's effective "shelf life." While online fine-tuning can reduce this degradation, we find that naively fine-tuning on a stream of documents leads to a low level of information uptake. We hypothesize that online fine-tuning does not sufficiently attend to important information. That is, the gradient signal from important tokens representing factual information is drowned out by the gradient from inherently noisy tokens, suggesting that a dynamic, context-aware learning rate may be beneficial. We therefore propose learning which tokens to upweight. We meta-train a small, autoregressive model to reweight the language modeling loss for each token during online fine-tuning, with the objective of maximizing the out-of-date base question-answering model's ability to answer questions about a document after a single weighted gradient step. We call this approach Context-aware Meta-learned Loss Scaling (CaMeLS). Across three different distributions of documents, our experiments find that CaMeLS provides substantially improved information uptake on streams of thousands of documents compared with standard fine-tuning and baseline heuristics for reweighting token losses.
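
To make the adaptation step described above concrete, the sketch below illustrates one token-weighted online fine-tuning update. It is an illustration only, not the authors' released code: it assumes a Hugging Face-style causal language model (base_model) that returns .logits, a hypothetical weighting network (weight_model) that returns one importance weight per input token, and a plain SGD update in place of the paper's training setup.

import torch
import torch.nn.functional as F

def weighted_finetune_step(base_model, weight_model, input_ids, attention_mask, lr=1e-5):
    """One online fine-tuning step with per-token loss weights (illustrative sketch)."""
    # Per-token language-modeling loss on the document (next-token prediction).
    outputs = base_model(input_ids=input_ids, attention_mask=attention_mask)
    logits = outputs.logits[:, :-1, :]            # predictions for positions 1..T-1
    targets = input_ids[:, 1:]                    # shifted next-token targets
    token_loss = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        targets.reshape(-1),
        reduction="none",
    ).view(targets.shape)                         # shape (batch, T-1)

    # Context-aware per-token weights from the (already meta-trained) weighting model.
    with torch.no_grad():                          # weights are fixed at adaptation time
        weights = weight_model(input_ids, attention_mask)   # assumed shape (batch, T)
        weights = weights[:, 1:] * attention_mask[:, 1:].float()  # align with targets, ignore padding

    # Weighted average of token losses, as opposed to the uniform average of naive fine-tuning.
    loss = (weights * token_loss).sum() / weights.sum().clamp(min=1e-8)

    # Single weighted gradient step on the out-of-date base model's parameters.
    base_model.zero_grad()
    loss.backward()
    with torch.no_grad():
        for p in base_model.parameters():
            if p.grad is not None:
                p -= lr * p.grad
    return loss.item()

In this sketch, setting all weights to 1 recovers standard online fine-tuning; the meta-learned weights instead upweight tokens carrying factual information so that a single gradient step improves the base model's downstream question answering.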
Pages: 4418 - 4432
Page count: 15