Multi-Modal Military Event Extraction Based on Knowledge Fusion

Times Cited: 0
Authors
Xiang, Yuyuan [1 ]
Jia, Yangli [1 ]
Zhang, Xiangliang [1 ]
Zhang, Zhenling [1 ]
Affiliations
[1] Liaocheng Univ, Sch Comp Sci, Liaocheng 252059, Peoples R China
Source
CMC-COMPUTERS MATERIALS & CONTINUA | 2023, Vol. 77, No. 1
Funding
National Natural Science Foundation of China;
Keywords
Event extraction; multi-modal; knowledge fusion; pre-trained models;
DOI
10.32604/cmc.2023.040751
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Event extraction is an important task in information extraction that aims to automatically extract structured event information from large volumes of unstructured text. Extracting event elements from multi-modal data remains challenging because the data contain a large number of images and overlapping event elements. Although researchers have proposed various methods for this task, most existing event extraction models cannot address these challenges because they are applicable only to text. To solve these issues, this paper proposes a multi-modal event extraction method based on knowledge fusion. Specifically, for event-type recognition, we use a pipeline approach that integrates multiple pre-trained models. This approach captures the multidimensional event semantic features in military texts more comprehensively, thereby strengthening the connection between trigger words and events. For event element extraction, we propose a method for constructing a priori templates that combine event types with their corresponding trigger words. These templates yield fine-grained input samples containing event trigger words, enabling the model to understand the semantic relationships between elements in greater depth. Furthermore, a fusion method that spatially maps textual event elements onto image elements is proposed to reduce the number of element categories and effectively achieve multi-modal knowledge fusion. Experimental results on the CCKS 2022 dataset show that our method achieves competitive performance, with an overall F1-score of 53.4%. These results validate the effectiveness of our method in extracting event elements from multi-modal data.
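As a hypothetical illustration of the a priori template construction described above, the Python sketch below rebuilds the extractor's input so that the predicted event type and its trigger word precede the source sentence. The record layout, the tag names ([EVENT], [TRIGGER], [ROLES], [TEXT]), and the example event type and roles are assumptions, not the paper's published template.

```python
# Minimal sketch of a priori template construction: the element-extraction
# model receives event-aware input built from the predicted event type and
# its trigger word. All labels below are hypothetical placeholders.

from dataclasses import dataclass
from typing import List


@dataclass
class EventMention:
    event_type: str   # e.g., "Military-Exercise" (assumed label set)
    trigger: str      # trigger word detected in the sentence
    sentence: str     # original source sentence


def build_prior_template(mention: EventMention, roles: List[str]) -> str:
    """Prefix the sentence with its event type, trigger word, and the roles
    to fill, so the extractor sees fine-grained, event-specific input."""
    role_hint = ", ".join(roles)
    return (
        f"[EVENT] {mention.event_type} [TRIGGER] {mention.trigger} "
        f"[ROLES] {role_hint} [TEXT] {mention.sentence}"
    )


if __name__ == "__main__":
    m = EventMention(
        event_type="Military-Exercise",
        trigger="conducted",
        sentence="The fleet conducted a live-fire exercise in the East China Sea.",
    )
    # The templated string is what a pre-trained encoder would then consume.
    print(build_prior_template(m, ["subject", "location", "time"]))
```

Because the event type and trigger are stated explicitly in the input, the extractor only has to label element spans relative to one declared event, which is one plausible reading of the "fine-grained input samples containing event trigger words" mentioned in the abstract.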
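Likewise, the spatial-mapping fusion of textual and image elements can be pictured as projecting both modalities into one shared space and aligning them there. The PyTorch sketch below is only an assumed realization: the class name, feature dimensions, and the cosine-similarity matching rule are not taken from the paper.

```python
# Hypothetical sketch of spatial-mapping fusion: textual event-element
# embeddings and image-element embeddings are projected into a shared
# space, and each image element is aligned to its closest textual element.

import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatialMappingFusion(nn.Module):
    def __init__(self, text_dim: int = 768, image_dim: int = 2048, shared_dim: int = 256):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, shared_dim)    # map text features
        self.image_proj = nn.Linear(image_dim, shared_dim)  # map image features

    def forward(self, text_feats: torch.Tensor, image_feats: torch.Tensor) -> torch.Tensor:
        # text_feats: (num_text_elements, text_dim)
        # image_feats: (num_image_elements, image_dim)
        t = F.normalize(self.text_proj(text_feats), dim=-1)
        v = F.normalize(self.image_proj(image_feats), dim=-1)
        sim = v @ t.T                # (num_image, num_text) cosine similarities
        return sim.argmax(dim=-1)    # best-matching text element per image element


if __name__ == "__main__":
    fusion = SpatialMappingFusion()
    text = torch.randn(5, 768)    # e.g., 5 textual event elements (BERT-sized)
    image = torch.randn(3, 2048)  # e.g., 3 detected image regions (ResNet-sized)
    print(fusion(text, image))    # tensor of 3 indices into the text elements
```

Aligning image elements to existing textual element categories, rather than introducing image-specific labels, is one way such a design could avoid inflating the number of categories, as the abstract describes.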
Pages: 97 - 114
Page count: 18
Related Papers
(50 records in total)
  • [21] Fusion of Multi-Modal Underwater Ship Inspection Data with Knowledge Graphs
    Hirsch, Joseph
    Elvesaeter, Brian
    Cardaillac, Alexandre
    Bauer, Bernhard
    Waszak, Maryna
    2022 OCEANS HAMPTON ROADS, 2022,
  • [22] Research and Comprehensive Review on Multi-Modal Knowledge Graph Fusion Techniques
    Chen, Youren
    Li, Yong
    Wen, Ming
    Sun, Chi
COMPUTER ENGINEERING AND APPLICATIONS, 2024, 60 (13) : 36 - 50
  • [23] Robust multi-modal fusion architecture for medical data with knowledge distillation
    Wang, Muyu
    Fan, Shiyu
    Li, Yichen
    Gao, Binyu
    Xie, Zhongrang
    Chen, Hui
    COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE, 2025, 260
  • [24] Infrared thermal image ROI extraction algorithm based on fusion of multi-modal feature maps
    Zhu Li
    Zhang Jing
    Fu Ying-Kai
    Shen Hui
    Zhang Shou-Feng
    Hong Xiang-Gong
    JOURNAL OF INFRARED AND MILLIMETER WAVES, 2019, 38 (01) : 125 - 132
  • [25] Research on structural knowledge extraction and organization for multi-modal governmental documents
    Xu R.
    Geng B.
    Liu S.
Xi Tong Gong Cheng Yu Dian Zi Ji Shu/Systems Engineering and Electronics, 2022, 44 (07) : 2241 - 2250
  • [26] Multi-Modal Knowledge Graph Transformer Framework for Multi-Modal Entity Alignment
    Li, Qian
    Ji, Cheng
    Guo, Shu
    Liang, Zhaoji
    Wang, Lihong
    Li, Jianxin
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS - EMNLP 2023, 2023, : 987 - 999
  • [27] Event-centric multi-modal fusion method for dense video captioning
    Chang, Zhi
    Zhao, Dexin
    Chen, Huilin
    Li, Jingdan
    Liu, Pengfei
    NEURAL NETWORKS, 2022, 146 : 120 - 129
  • [28] Medical Visual Question-Answering Model Based on Knowledge Enhancement and Multi-Modal Fusion
    Zhang, Dianyuan
    Yu, Chuanming
    An, Lu
PROCEEDINGS OF THE ASSOCIATION FOR INFORMATION SCIENCE AND TECHNOLOGY, 2024, 61 (01) : 703 - 708
  • [29] Robust indoor localization based on multi-modal information fusion and multi-scale sequential feature extraction
    Wang, Qinghu
    Jia, Jie
    Chen, Jian
    Deng, Yansha
    Wang, Xingwei
    Aghvami, Abdol Hamid
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2024, 155 : 164 - 178
  • [30] Semantic event extraction from basketball games using multi-modal analysis
    Zhang, Yifan
    Xu, Changsheng
Rui, Yong
    Wang, Jinqiao
    Lu, Hanqing
    2007 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, VOLS 1-5, 2007, : 2190 - +