Multi-Modal Military Event Extraction Based on Knowledge Fusion

Cited: 0
Authors
Xiang, Yuyuan [1 ]
Jia, Yangli [1 ]
Zhang, Xiangliang [1 ]
Zhang, Zhenling [1 ]
Affiliations
[1] Liaocheng University, School of Computer Science, Liaocheng 252059, People's Republic of China
Source
CMC-COMPUTERS MATERIALS & CONTINUA, 2023, Vol. 77, No. 1
Funding
National Natural Science Foundation of China
Keywords
Event extraction; multi-modal; knowledge fusion; pre-trained models
DOI
10.32604/cmc.2023.040751
CLC Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Event extraction is a core task in information extraction that aims to automatically derive structured event information from large volumes of unstructured text. Extracting event elements from multi-modal data remains challenging because the data contain many images and overlapping event elements. Although various methods have been proposed for this task, most existing event extraction models cannot address these challenges because they apply only to text. To address these issues, this paper proposes a multi-modal event extraction method based on knowledge fusion. Specifically, for event-type recognition, we use a pipeline approach that integrates multiple pre-trained models. This approach captures the multidimensional event semantic features of military texts more comprehensively and strengthens the association between trigger words and events. For event element extraction, we propose a method for constructing prior templates that combine event types with their corresponding trigger words. The resulting fine-grained input samples contain the event trigger words, enabling the model to learn the semantic relationships between elements in greater depth. Furthermore, we propose a fusion method that spatially maps textual event elements and image elements into a shared space, reducing the overload caused by the large number of element categories and effectively achieving multi-modal knowledge fusion. Experiments on the CCKS 2022 dataset show that our method achieves competitive results, with an overall F1-score of 53.4%. These results validate the effectiveness of our method in extracting event elements from multi-modal data.
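The abstract describes the event-type recognition pipeline only at a high level. The following is a minimal illustrative sketch, not the authors' implementation: it averages the class probabilities of several pre-trained encoders. The checkpoint names and the event-type label set are placeholders, and in practice each encoder would first be fine-tuned on the military event-type classification task.

```python
# Sketch only: ensemble-style pipeline over multiple pre-trained models.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

EVENT_TYPES = ["Attack", "Deploy", "Reconnaissance", "Support"]  # hypothetical labels
MODEL_NAMES = ["bert-base-chinese", "hfl/chinese-roberta-wwm-ext"]  # placeholder checkpoints

def predict_event_type(text: str) -> str:
    """Average the class probabilities of each encoder (a simple ensemble)."""
    probs = None
    for name in MODEL_NAMES:
        tokenizer = AutoTokenizer.from_pretrained(name)
        # num_labels resizes the classification head; weights would come
        # from task fine-tuning in a real system.
        model = AutoModelForSequenceClassification.from_pretrained(
            name, num_labels=len(EVENT_TYPES))
        inputs = tokenizer(text, return_tensors="pt",
                           truncation=True, max_length=512)
        with torch.no_grad():
            logits = model(**inputs).logits
        p = torch.softmax(logits, dim=-1)
        probs = p if probs is None else probs + p
    return EVENT_TYPES[int(probs.argmax())]
```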
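The paper does not give the exact template format, so the sketch below shows one plausible shape for a prior template that combines an event type with its trigger word; the wording of the prefix and the example sentence are our assumptions.

```python
def build_prior_template(text: str, event_type: str, trigger: str) -> str:
    """Combine the recognized event type with its trigger word into a
    prompt-style prefix, yielding a fine-grained input sample in which
    the element-extraction model sees the trigger explicitly."""
    return f"Event type: {event_type}. Trigger word: {trigger}. Text: {text}"

# Hypothetical usage:
sample = build_prior_template(
    "Enemy forces shelled the outpost at dawn.", "Attack", "shelled")
print(sample)
```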
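Likewise, the spatial-mapping fusion of textual and image elements is only summarized in the abstract. One common realization, shown below purely as an assumption rather than the paper's method, projects the features of both modalities into a shared space and aligns each textual event element with its most similar image element; all dimensions are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialMappingFusion(nn.Module):
    """Map both modalities into one shared space, then align each textual
    event element with the image element it most resembles."""

    def __init__(self, text_dim: int = 768, image_dim: int = 2048,
                 shared_dim: int = 256):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, shared_dim)
        self.image_proj = nn.Linear(image_dim, shared_dim)

    def forward(self, text_feats: torch.Tensor,
                image_feats: torch.Tensor) -> torch.Tensor:
        t = F.normalize(self.text_proj(text_feats), dim=-1)    # (n_text, d)
        v = F.normalize(self.image_proj(image_feats), dim=-1)  # (n_img, d)
        sim = t @ v.T              # pairwise cosine similarities
        return sim.argmax(dim=-1)  # best image index per textual element

# Toy usage with random features:
fusion = SpatialMappingFusion()
match = fusion(torch.randn(5, 768), torch.randn(3, 2048))
print(match)  # one image-element index for each of the 5 textual elements
```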
Pages: 97-114
Number of pages: 18