Dual Channel Text Relation Extraction Based on Cross Attention

Cited: 0
Authors
Ye, Naifu [1 ]
Yuan, Deyu [1 ,2 ]
Zhang, Zhi [1 ]
Hou, Xiaolong [3 ]
Affiliations
[1] School of Information and Cyber Security, People's Public Security University of China, Beijing
[2] Key Laboratory of Safety Precautions and Risk Assessment, Ministry of Public Security, Beijing
[3] School of Investigation, People's Public Security University of China, Beijing
Keywords
Cross Attention; Dual Channel Mechanisms; Text Relation Extraction
DOI
10.11925/infotech.2096-3467.2023.0841
Abstract
[Objective] This paper constructs a dual-channel text relation extraction model based on cross attention to address the incomplete text-feature coverage of existing models. The model aims to enhance the comprehensiveness and accuracy of text relation extraction and to achieve high-performance relation extraction on domain-specific datasets. [Methods] We proposed DCCAM (Dual Channel Cross Attention Model), a cross-attention-based dual-channel relation extraction model, designing a dual-channel structure that integrates a sequence channel and a graph channel. We then constructed a cross-attention mechanism combining self-attention and gated attention to promote deep fusion of text features and mine latent associative information. Finally, we conducted experiments on public datasets and two newly constructed policing datasets. [Results] On the NYT and WebNLG public datasets, the DCCAM model's F1 scores improved by 3% and 4% over the baseline model, and ablation experiments confirmed that each module contributes to extraction capability. On the telecom-fraud dataset and the aiding-cybercrime dataset in the police domain, DCCAM improved text relation extraction effectiveness, with F1 scores rising by 8.8% and 11.8% over the baseline model. [Limitations] We did not explore text relation extraction with large language models. [Conclusions] The DCCAM model significantly improves text relation extraction, demonstrates its effectiveness and practicality on text relation extraction tasks in the policing domain, and can support text association analysis and guidance for police work. © 2024 Chinese Academy of Sciences. All rights reserved.
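To make the fusion step described above concrete, the following is a minimal PyTorch sketch of how a gated cross-attention between a sequence channel and a graph channel could be wired up. It is not the authors' released code: the hidden size, head count, single fusion layer, and sigmoid gate are illustrative assumptions, since the abstract does not give the exact formulation.

```python
# A minimal sketch (not the paper's implementation) of dual-channel fusion via
# cross attention. Hyperparameters and the gating formula are assumptions.
import torch
import torch.nn as nn


class CrossAttentionFusion(nn.Module):
    """Fuses sequence-channel and graph-channel features.

    Self-attention is applied within the sequence channel; cross attention then
    lets the sequence tokens query the graph nodes, and a sigmoid gate decides
    how much cross-channel information to admit at each position.
    """

    def __init__(self, hidden: int = 256, heads: int = 4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.gate = nn.Linear(2 * hidden, hidden)  # assumed gated-attention form

    def forward(self, seq_feat: torch.Tensor, graph_feat: torch.Tensor) -> torch.Tensor:
        # seq_feat:   (batch, seq_len, hidden), e.g. from a BiLSTM/BERT encoder
        # graph_feat: (batch, n_nodes, hidden), e.g. from a GCN over a dependency graph
        seq_ctx, _ = self.self_attn(seq_feat, seq_feat, seq_feat)        # intra-channel self-attention
        cross_ctx, _ = self.cross_attn(seq_ctx, graph_feat, graph_feat)  # sequence queries graph
        g = torch.sigmoid(self.gate(torch.cat([seq_ctx, cross_ctx], dim=-1)))
        return g * cross_ctx + (1 - g) * seq_ctx                         # gated fusion of the two views


if __name__ == "__main__":
    fusion = CrossAttentionFusion(hidden=256)
    seq = torch.randn(2, 40, 256)    # toy sequence-channel features
    graph = torch.randn(2, 12, 256)  # toy graph-channel node features
    print(fusion(seq, graph).shape)  # torch.Size([2, 40, 256])
```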
Pages: 114-125
Number of pages: 11