Multi-Modal Military Event Extraction Based on Knowledge Fusion

Cited: 0
Authors
Xiang, Yuyuan [1 ]
Jia, Yangli [1 ]
Zhang, Xiangliang [1 ]
Zhang, Zhenling [1 ]
Affiliations
[1] Liaocheng Univ, Sch Comp Sci, Liaocheng 252059, Peoples R China
Source
CMC-COMPUTERS MATERIALS & CONTINUA | 2023, Vol. 77, No. 01
Funding
National Natural Science Foundation of China;
Keywords
Event extraction; multi-modal; knowledge fusion; pre-trained models;
DOI
10.32604/cmc.2023.040751
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Event extraction is a significant task within information extraction, aiming to automatically extract structured event information from large volumes of unstructured text. Extracting event elements from multi-modal data remains challenging due to the large number of images and overlapping event elements in the data. Although researchers have proposed various methods for this task, most existing event extraction models cannot address these challenges because they apply only to text. To solve these issues, this paper proposes a multi-modal event extraction method based on knowledge fusion. Specifically, for event-type recognition, we use a pipeline approach that integrates multiple pre-trained models, enabling more comprehensive capture of the multidimensional event semantic features present in military texts and strengthening the link between trigger words and events. For event element extraction, we propose a method for constructing prior templates that combine event types with their corresponding trigger words. This yields fine-grained input samples containing event trigger words, enabling the model to understand the semantic relationships between elements in greater depth. Furthermore, a fusion method that spatially maps textual event elements onto image elements is proposed to reduce category overload and effectively achieve multi-modal knowledge fusion. Experimental results on the CCKS 2022 dataset show that our method achieves competitive results, with an overall F1-score of 53.4%. These results validate the effectiveness of our method in extracting event elements from multi-modal data.
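The abstract describes building prior templates that pair each event type with its trigger word to produce fine-grained input samples for element extraction. A minimal sketch of that idea is below; the function name, the bracket/`[SEP]` template format, and the example sentence are illustrative assumptions, not the paper's exact implementation.

```python
# Sketch: construct a prior-template input sample that prefixes the raw
# sentence with its event type and trigger word, so the element extractor
# knows which event instance its arguments belong to.
# NOTE: template layout is an assumption for illustration only.

def build_prior_template(text: str, event_type: str, trigger: str) -> str:
    """Combine event type, trigger word, and sentence into one input sample."""
    return f"[{event_type}] trigger word: {trigger} [SEP] {text}"

sample = build_prior_template(
    text="The fleet launched an exercise in the South China Sea on Monday.",
    event_type="Military Exercise",
    trigger="launched",
)
print(sample)
```

In a pipeline like the one described, one such sample would be generated per recognized (event type, trigger) pair, so a sentence with overlapping events yields multiple disambiguated inputs.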
Pages: 97-114 (18 pages)