Multi-View Cooperative Learning with Invariant Rationale for Document-Level Relation Extraction

Cited: 0
Authors
Lin, Rui [1 ]
Fan, Jing [2 ,3 ]
He, Yinglong [2 ]
Yang, Yehui [2 ]
Li, Jia [4 ]
Guo, Cunhan [5 ]
Affiliations
[1] Yunnan Univ, Dept Elect Engn, Kunming 650500, Peoples R China
[2] Yunnan Minzu Univ, Univ Key Lab Informat & Commun Secur Backup & Reco, Kunming 650500, Peoples R China
[3] Educ Instruments & Facil Serv Ctr, Educ Dept Yunnan Prov, Kunming 650500, Peoples R China
[4] Henan Normal Univ, Coll Comp & Informat Engn, Xinxiang 453000, Peoples R China
[5] Univ Chinese Acad Sci, Sch Emergency Management Sci & Engn, 1,Yanqihu East Rd, Beijing 101400, Peoples R China
Keywords
Natural language processing; Relation extraction; Multi-view cooperative learning; Document-level; Rationale
DOI
10.1007/s12559-024-10322-z
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Document-level relation extraction (RE) is a complex and significant natural language processing task, since documents contain a large number of entity pairs, many of which span multiple sentences. However, existing deep-learning-based relation extraction methods typically rely on single-view information (e.g., entity-level or sentence-level) to learn relational information and ignore multi-view information; moreover, although these models achieve good results, their predictions are difficult to explain. To extract high-quality relational information from documents and improve model explainability, we propose a multi-view cooperative learning with invariant rationale (MCLIR) framework. First, we design multi-view cooperative learning to discover latent relational information across the various views. Second, we use an invariant rationale to encourage the model to focus on crucial information, which improves both the performance and the explainability of the model. We conduct experiments on two public datasets, and the results demonstrate the effectiveness of MCLIR.
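The record gives no implementation details, so the following is only an illustrative sketch of the two ideas the abstract names: fusing an entity-level view with a sentence-level view, and restricting attention to a small "rationale" of salient tokens. The function names, the top-k rationale selection, and the concatenation-based fusion are assumptions of this sketch, not the authors' actual method.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def rationale_mask(scores, k):
    """Mark the k highest-scoring tokens as the rationale (1.0), the rest 0.0."""
    top = sorted(range(len(scores)), key=lambda i: scores[i])[-k:]
    return [1.0 if i in top else 0.0 for i in range(len(scores))]

def fuse_views(entity_view, token_vecs, token_scores, k=2):
    """Fuse an entity-level view with a rationale-weighted sentence-level view."""
    mask = rationale_mask(token_scores, k)
    # Attend only within the rationale: masked-out tokens get a -inf-like score.
    weights = softmax([s if m > 0 else -1e9 for s, m in zip(token_scores, mask)])
    dim = len(token_vecs[0])
    # Pool token vectors into one rationale-focused sentence-level feature.
    pooled = [sum(w * v[d] for w, v in zip(weights, token_vecs)) for d in range(dim)]
    return entity_view + pooled  # concatenated multi-view representation
```

In this toy form, tokens outside the rationale receive near-zero attention weight, so the pooled sentence-level feature depends only on the tokens the mask deems crucial, which is what lets the rationale double as an explanation of the prediction.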
Pages: 3505-3517
Page count: 13