MERGEDISTILL: Merging Pre-trained Language Models using Distillation

Cited by: 0
Authors
Khanuja, Simran [1 ]
Johnson, Melvin [2 ]
Talukdar, Partha [1 ]
Affiliations
[1] Google, Bangalore, Karnataka, India
[2] Google, Mountain View, CA 94043 USA
Source
FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL-IJCNLP 2021 | 2021
Keywords
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Pre-trained multilingual language models (LMs) have achieved state-of-the-art results in cross-lingual transfer, but they often lead to an inequitable representation of languages due to limited capacity, skewed pre-training data, and sub-optimal vocabularies. This has prompted the creation of an ever-growing pre-trained model universe, where each model is trained on large amounts of language or domain specific data with a carefully curated, linguistically informed vocabulary. However, doing so brings us back full circle and prevents one from leveraging the benefits of multilinguality. To address the gaps at both ends of the spectrum, we propose MERGEDISTILL, a framework to merge pre-trained LMs in a way that can best leverage their assets with minimal dependencies, using task-agnostic knowledge distillation. We demonstrate the applicability of our framework in a practical setting by leveraging pre-existing teacher LMs and training student LMs that perform competitively with or even outperform teacher LMs trained on several orders of magnitude more data and with a fixed model capacity. We also highlight the importance of teacher selection and its impact on student model performance.
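As a rough illustration of the task-agnostic distillation objective described in the abstract, the sketch below trains a student masked LM to match a frozen teacher's predictions at masked positions. This is a minimal PyTorch sketch under simplifying assumptions, not the paper's released implementation: it assumes the teacher and student share a vocabulary (MERGEDISTILL additionally maps each teacher's predictions onto the student's vocabulary before distillation), and the names mlm_distillation_loss, distill_step, and mask_token_id, as well as the temperature value, are illustrative.

```python
import torch
import torch.nn.functional as F

def mlm_distillation_loss(student_logits, teacher_logits, mask, temperature=2.0):
    """KL divergence between teacher and student output distributions,
    computed only at the masked token positions.

    student_logits, teacher_logits: tensors of shape (batch, seq_len, vocab_size)
    mask: boolean tensor of shape (batch, seq_len) marking masked positions.
    """
    log_p_student = F.log_softmax(student_logits[mask] / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits[mask] / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2

def distill_step(student, teacher, batch, optimizer, mask_token_id):
    """One task-agnostic distillation step: the teacher is frozen,
    only the student's parameters are updated."""
    teacher.eval()
    with torch.no_grad():
        teacher_logits = teacher(**batch).logits
    student_logits = student(**batch).logits
    mask = batch["input_ids"] == mask_token_id
    loss = mlm_distillation_loss(student_logits, teacher_logits, mask)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a merging setting the teacher predictions (e.g., top-k logits per masked position) can also be precomputed offline, so that multiple large teachers need not be kept in memory while the single student is trained.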
Pages: 2874-2887
Number of pages: 14