Learning Knowledge-Enhanced Contextual Language Representations for Domain Natural Language Understanding

Cited by: 0
Authors
Zhang, Taolin [1 ,2 ]
Xu, Ruyao [1 ]
Wang, Chengyu [2 ]
Duan, Zhongjie [1 ]
Chen, Cen [1 ]
Qiu, Minghui [2 ]
Cheng, Dawei [3 ]
He, Xiaofeng [1 ]
Qian, Weining [1 ]
Affiliations
[1] East China Normal Univ, Shanghai, Peoples R China
[2] Alibaba Grp, Hangzhou, Peoples R China
[3] Tongji Univ, Shanghai, Peoples R China
Source
2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2023) | 2023
Funding
National Natural Science Foundation of China;
Keywords
MODEL;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Knowledge-Enhanced Pre-trained Language Models (KEPLMs) improve the performance of various downstream NLP tasks by injecting knowledge facts from large-scale Knowledge Graphs (KGs). However, existing methods for pre-training KEPLMs with relational triples are difficult to adapt to closed domains due to the lack of sufficient domain graph semantics. In this paper, we propose a Knowledge-enhanced lANGuAge Representation learning framework for various clOsed dOmains (KANGAROO) that captures the implicit graph structure among the entities. Specifically, since the entity coverage rates of closed-domain KGs can be relatively low and may exhibit a global sparsity phenomenon for knowledge injection, we consider not only the shallow relational representations of triples but also the hyperbolic embeddings of deep hierarchical entity-class structures for effective knowledge fusion. Moreover, as two closed-domain entities under the same entity class often have locally dense neighbor subgraphs, measured by the maximum point biconnected component, we further propose a data augmentation strategy based on contrastive learning over subgraphs to construct higher-quality hard negative samples. This helps the underlying KEPLMs better distinguish the semantics of these neighboring entities and further compensates for the global semantic sparsity. In the experiments, we evaluate KANGAROO over various knowledge-aware and general NLP tasks in both full and few-shot learning settings, significantly outperforming various KEPLM training paradigms in closed domains.
Pages: 15663-15676
Page count: 14
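The abstract mentions two technical ingredients that a short sketch can make concrete: hyperbolic (Poincaré-ball) embeddings for deep hierarchical entity-class structures, and a contrastive objective whose hard negatives are drawn from an entity's locally dense neighbor subgraph. The minimal NumPy sketch below illustrates generic versions of both components; the function names, the InfoNCE-style formulation, and all shapes and values are illustrative assumptions, not the authors' KANGAROO implementation.

import numpy as np

def poincare_distance(u: np.ndarray, v: np.ndarray) -> float:
    """Geodesic distance between two points inside the unit Poincare ball,
    a common choice for embedding deep hierarchical (tree-like) structures."""
    sq_diff = float(np.sum((u - v) ** 2))
    denom = (1.0 - float(np.sum(u ** 2))) * (1.0 - float(np.sum(v ** 2)))
    return float(np.arccosh(1.0 + 2.0 * sq_diff / denom))

def subgraph_contrastive_loss(anchor, positive, hard_negatives, temperature=0.07):
    """InfoNCE-style loss: the anchor entity is pulled toward its positive view
    and pushed away from hard negatives taken from a dense neighbor subgraph."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    logits = np.array([cosine(anchor, positive)] +
                      [cosine(anchor, n) for n in hard_negatives]) / temperature
    logits -= logits.max()  # numerical stability before the softmax
    return float(-(logits[0] - np.log(np.exp(logits).sum())))  # positive sits at index 0

# Toy usage with random placeholder embeddings.
rng = np.random.default_rng(0)
u = rng.standard_normal(8); u *= 0.5 / np.linalg.norm(u)  # keep inside the unit ball
v = rng.standard_normal(8); v *= 0.7 / np.linalg.norm(v)
print("hyperbolic entity-class distance:", poincare_distance(u, v))
anchor, positive = rng.standard_normal(16), rng.standard_normal(16)
hard_negatives = rng.standard_normal((4, 16))  # stand-ins for close neighbor entities
print("subgraph contrastive loss:", subgraph_contrastive_loss(anchor, positive, hard_negatives))

In the paper's setting, the hard negatives would presumably be representations of entities sharing a maximum point biconnected component with the anchor, rather than the random placeholders used here.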