Connecting Pre-trained Language Models and Downstream Tasks via Properties of Representations

Cited by: 0
Authors
Wu, Chenwei [1 ]
Lee, Holden [2 ]
Ge, Rong [1 ]
Affiliations
[1] Duke Univ, Durham, NC 27706 USA
[2] Johns Hopkins Univ, Baltimore, MD USA
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023) | 2023
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Recently, researchers have found that representations learned by large-scale pre-trained language models are useful in a variety of downstream tasks. However, there is little theoretical understanding of how pre-training performance relates to downstream task performance. In this paper, we analyze how this performance transfer depends on the properties of the downstream task and the structure of the representations. We consider a log-linear model in which a word is predicted from its context through a network whose last layer is a softmax. We show that even if the downstream task is highly structured and depends on a simple function of the hidden representation, there are still cases in which a low pre-training loss does not guarantee good performance on the downstream task. On the other hand, we propose and empirically validate the existence of an "anchor vector" in the representation space, and show that this assumption, together with properties of the downstream task, guarantees performance transfer.
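The setup described in the abstract can be illustrated with a minimal numerical sketch. The Python snippet below (illustrative only; vocab_size, dim, the random data, and the names predict_next_word_probs and anchor_vector are hypothetical and not taken from the paper) shows a log-linear next-word predictor whose last layer is a softmax over inner products with word embeddings, plus a toy projection of a context representation onto a fixed "anchor vector" direction of the kind the abstract proposes as a downstream feature.

import numpy as np

# Minimal sketch, assuming the context is already summarized by a vector
# context_repr and the next word is scored by <word_embedding, context_repr>.
rng = np.random.default_rng(0)
vocab_size, dim = 1000, 64
word_embeddings = rng.normal(size=(vocab_size, dim))  # softmax-layer weights

def predict_next_word_probs(context_repr):
    """Log-linear model: softmax over inner products with word embeddings."""
    logits = word_embeddings @ context_repr
    logits = logits - logits.max()        # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

# Hypothetical stand-in for the paper's "anchor vector": a fixed unit direction
# in representation space whose projection serves as a downstream feature.
anchor_vector = rng.normal(size=dim)
anchor_vector /= np.linalg.norm(anchor_vector)

context_repr = rng.normal(size=dim)       # stand-in for a learned representation
probs = predict_next_word_probs(context_repr)
downstream_feature = float(context_repr @ anchor_vector)
print(probs.argmax(), round(downstream_feature, 3))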
Pages: 23