Dual-stream dynamic graph structure network for document-level relation extraction

Times Cited: 1
Authors
Zhong, Yu
Shen, Bo [1]
Affiliations
[1] Beijing Jiaotong Univ, Sch Elect & Informat Engn, Beijing 100044, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Deep learning; Natural language processing; Graph convolutional network; Document-level relation extraction; Dynamic graph
DOI
10.1016/j.jksuci.2024.102202
Chinese Library Classification (CLC)
TP [automation technology; computer technology]
Discipline Code
0812
Abstract
Extracting structured information from unstructured text is crucial for knowledge management and utilization, and it is the goal of document-level relation extraction. Existing graph-based methods suffer from information confusion and poor information integration, which limits the model's reasoning capability. To tackle these problems, a dual-stream dynamic graph structure network is proposed to model documents from multiple perspectives. Leveraging the richness of document information, a static heterogeneous document graph is constructed, and a dynamic heterogeneous document graph is then induced from it to facilitate global information aggregation for entity representation learning. Additionally, the static document graph is decomposed into multi-level static semantic graphs, from which multi-layer dynamic semantic graphs are further induced, explicitly separating information from different levels. Information from the two streams is integrated via an information integrator, and a noise regularization mechanism is designed to mitigate noise interference during reasoning. Experimental results on three widely used public datasets for document-level relation extraction show that the model achieves F1 scores of 62.56%, 71.1%, and 86.9% on DocRED, CDR, and GDA, respectively, significantly outperforming the baselines. Further analysis also demonstrates the model's effectiveness in multi-entity scenarios.
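The abstract describes inducing a dynamic graph from a static document graph so that entity representations aggregate global information. The following is a minimal, illustrative sketch of one common way such induction is done (scaled dot-product scoring over node features followed by a GCN-style propagation step); the function name, random weight stand-ins, and the exact scoring form are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def induce_dynamic_graph(h, rng=None):
    """Induce a dense 'dynamic' adjacency from node features h (n, d)
    via scaled dot-product scores, then apply one GCN-style step:
    ReLU(A @ H @ W). Weights are random stand-ins for learned ones."""
    rng = np.random.default_rng(0) if rng is None else rng
    n, d = h.shape
    Wq, Wk, W = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    scores = (h @ Wq) @ (h @ Wk).T / np.sqrt(d)   # pairwise node scores
    scores -= scores.max(axis=-1, keepdims=True)  # numerically stable softmax
    adj = np.exp(scores)
    adj /= adj.sum(axis=-1, keepdims=True)        # rows sum to 1: soft adjacency
    return np.maximum(adj @ h @ W, 0.0)           # one propagation step

# Usage: 5 mention/entity nodes with 16-dim features.
h = np.random.default_rng(1).standard_normal((5, 16))
out = induce_dynamic_graph(h)
print(out.shape)  # (5, 16)
```

In the paper's dual-stream design, a step like this would run separately over the document-level and semantic-level graphs before an integrator merges the two streams.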
Pages: 11