MERGEDISTILL: Merging Pre-trained Language Models using Distillation

Cited by: 0
Authors
Khanuja, Simran [1 ]
Johnson, Melvin [2 ]
Talukdar, Partha [1 ]
Affiliations
[1] Google, Bangalore, Karnataka, India
[2] Google, Mountain View, CA 94043 USA
Source
FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL-IJCNLP 2021 | 2021
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Pre-trained multilingual language models (LMs) have achieved state-of-the-art results in cross-lingual transfer, but they often lead to an inequitable representation of languages due to limited capacity, skewed pre-training data, and sub-optimal vocabularies. This has prompted the creation of an ever-growing pre-trained model universe, where each model is trained on large amounts of language or domain specific data with a carefully curated, linguistically informed vocabulary. However, doing so brings us back full circle and prevents one from leveraging the benefits of multilinguality. To address the gaps at both ends of the spectrum, we propose MERGEDISTILL, a framework to merge pre-trained LMs in a way that can best leverage their assets with minimal dependencies, using task-agnostic knowledge distillation. We demonstrate the applicability of our framework in a practical setting by leveraging pre-existing teacher LMs and training student LMs that perform competitively with or even outperform teacher LMs trained on several orders of magnitude more data and with a fixed model capacity. We also highlight the importance of teacher selection and its impact on student model performance.
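The abstract describes the framework only at a high level. As one illustration of the kind of task-agnostic knowledge distillation it refers to, the sketch below trains a student to match a teacher's masked-language-modelling output distribution using a temperature-scaled KL-divergence loss. All names, shapes, and the temperature value are illustrative assumptions rather than details taken from the paper, and the sketch does not attempt the merging of multiple teachers that MERGEDISTILL addresses.

# Hedged sketch of task-agnostic knowledge distillation on the MLM objective:
# the student learns to match the teacher's distribution over masked positions.
# Shapes, names, and the temperature are illustrative assumptions only.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between softened teacher and student distributions.

    Both tensors have shape (num_masked_positions, vocab_size) and are
    assumed to be expressed over the same (student) vocabulary.
    """
    t = temperature
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    # 'batchmean' averages the KL over masked positions; multiplying by t**2
    # keeps gradient magnitudes comparable across temperatures (standard KD scaling).
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * (t ** 2)

# Toy usage with random logits standing in for teacher and student outputs.
vocab_size, n_masked = 32000, 8
teacher_logits = torch.randn(n_masked, vocab_size)
student_logits = torch.randn(n_masked, vocab_size, requires_grad=True)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()
print(f"distillation loss: {loss.item():.4f}")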
Pages: 2874-2887
Number of pages: 14