Lightweight Adaptation of Neural Language Models via Subspace Embedding

Cited by: 2
Authors
Jaiswal, Amit Kumar [1]
Liu, Haiming [2]
Affiliations
[1] Univ Surrey, Guildford, Surrey, England
[2] Univ Southampton, Southampton, Hants, England
Source
PROCEEDINGS OF THE 32ND ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2023 | 2023
Keywords
Word embedding; Language model; Natural language understanding
DOI
10.1145/3583780.3615269
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Traditional neural word embeddings typically depend on a large, diverse vocabulary. As a result, language models tend to devote a substantial share of their parameters to the word embedding table; in multilingual language models in particular, the embeddings generally account for a significant fraction of all learnable parameters. In this work, we present a new compact embedding structure that reduces the memory footprint of pre-trained language models at a cost of up to 4% absolute accuracy. Embedding vectors are reconstructed from a set of subspace embeddings through an assignment procedure that exploits the contextual relationships among tokens in the pre-trained language model. The subspace embedding structure is calibrated to masked language models, and we evaluate it on similarity, textual entailment, sentence, and paraphrase tasks. Our experimental evaluation shows that the subspace embeddings achieve compression rates beyond 99.8% relative to the original embeddings of the language models on the XNLI and GLUE benchmark suites.
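The abstract does not spell out the reconstruction procedure. As one concrete reading, the minimal sketch below assumes a product-quantization-style decomposition: the embedding dimension is split into subspaces, each subspace is clustered into a small codebook, and every token is assigned one codeword index per subspace, so the full table is replaced by small codebooks plus a few bytes of codes per token. The function names and hyper-parameters (compress, reconstruct, n_subspaces, n_codes) are illustrative assumptions, not the authors' implementation, and the toy compression ratio shown is not the 99.8% figure reported in the paper.

# Hypothetical sketch of subspace-embedding compression (not the authors' code).
import numpy as np
from sklearn.cluster import KMeans

def compress(emb: np.ndarray, n_subspaces: int = 8, n_codes: int = 256):
    """Split the embedding dimension into subspaces, cluster each subspace,
    and keep only the codebooks plus an integer assignment per token."""
    vocab, dim = emb.shape
    assert dim % n_subspaces == 0
    sub_dim = dim // n_subspaces
    codebooks, assignments = [], []
    for s in range(n_subspaces):
        block = emb[:, s * sub_dim:(s + 1) * sub_dim]          # (vocab, sub_dim)
        km = KMeans(n_clusters=n_codes, n_init=4, random_state=0).fit(block)
        codebooks.append(km.cluster_centers_.astype(np.float32))  # (n_codes, sub_dim)
        assignments.append(km.labels_.astype(np.uint8))            # (vocab,)
    return np.stack(codebooks), np.stack(assignments, axis=1)

def reconstruct(codebooks: np.ndarray, assignments: np.ndarray) -> np.ndarray:
    """Rebuild approximate token embeddings by concatenating, for each token,
    its assigned codeword from every subspace codebook."""
    parts = [codebooks[s][assignments[:, s]] for s in range(codebooks.shape[0])]
    return np.concatenate(parts, axis=1)

# Toy usage: a 5,000 x 768 embedding table is replaced by 8 codebooks of
# 256 x 96 floats plus 8 bytes of codes per token.
emb = np.random.randn(5_000, 768).astype(np.float32)
books, codes = compress(emb)
approx = reconstruct(books, codes)
print(approx.shape, books.nbytes + codes.nbytes, "vs", emb.nbytes)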
Pages: 3968-3972
Page count: 5