TLFND: A Multimodal Fusion Model Based on Three-Level Feature Matching Distance for Fake News Detection

Cited by: 1
Authors
Wang, Junda [1 ,2 ]
Zheng, Jeffrey [2 ]
Yao, Shaowen [1 ]
Wang, Rui [2 ]
Du, Hong [2 ]
Li, Wei
Affiliations
[1] Yunnan Univ, Engn Res Ctr Cyberspace, Kunming 650091, Peoples R China
[2] Yunnan Univ, Sch Software, Kunming 650091, Peoples R China
Keywords
fake news detection; deep learning; matching distance; multimodal integrated detection;
DOI
10.3390/e25111533
Chinese Library Classification (CLC)
O4 [Physics];
Subject Classification Code
0702;
Abstract
In the rapidly evolving information era, the dissemination of information has become swifter and more extensive. Fake news, in particular, spreads more rapidly and is produced at a lower cost than genuine news. While researchers have developed various methods for the automated detection of fake news, challenges such as the presence of multimodal information in news articles and the scarcity of multimodal data have hindered their detection efficacy. To address these challenges, we introduce a novel multimodal fusion model (TLFND) based on a three-level feature matching distance approach for fake news detection. TLFND comprises four core components: a two-level text feature extraction module, an image extraction and fusion module, a three-level feature matching score module, and a multimodal integrated recognition module. This model seamlessly combines two levels of text information (headline and body) and image data (multi-image fusion) within news articles. Notably, we introduce the Chebyshev distance metric for the first time to calculate matching scores among these three modalities. Additionally, we design an adaptive evolutionary algorithm for computing the loss functions of the four model components. Our comprehensive experiments on three real-world publicly available datasets validate the effectiveness of our proposed model, with marked improvements across all four evaluation metrics on the PolitiFact, GossipCop, and Twitter datasets, including F1 score increases of 6.6%, 2.9%, and 2.3%, respectively.
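The abstract names two technical ingredients: pairwise matching scores among the headline, body, and fused-image representations computed with the Chebyshev distance, and an adaptive weighting of the four component losses. The sketch below illustrates only the first idea; the embedding names, the score mapping 1/(1 + d), and the function names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the three-level matching-score idea: pairwise Chebyshev
# (L-infinity) distances between headline, body, and fused-image embeddings,
# mapped to matching scores. All names and the 1/(1+d) mapping are assumptions.
import torch


def chebyshev_distance(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Chebyshev distance: maximum absolute coordinate-wise difference."""
    return (a - b).abs().max(dim=-1).values


def matching_scores(headline_emb: torch.Tensor,
                    body_emb: torch.Tensor,
                    image_emb: torch.Tensor) -> dict:
    """Convert pairwise Chebyshev distances into scores in (0, 1]."""
    distances = {
        "headline-body": chebyshev_distance(headline_emb, body_emb),
        "headline-image": chebyshev_distance(headline_emb, image_emb),
        "body-image": chebyshev_distance(body_emb, image_emb),
    }
    # Smaller distance -> larger matching score (one common choice of mapping).
    return {pair: 1.0 / (1.0 + d) for pair, d in distances.items()}


if __name__ == "__main__":
    # Toy example: a batch of 2 news items with 128-dimensional embeddings.
    headline, body, image = (torch.randn(2, 128) for _ in range(3))
    print(matching_scores(headline, body, image))
```

In this reading, the three scores would be passed, together with the text and image features, to the multimodal integrated recognition module; how they are combined there is specific to the paper and not reproduced here.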
Pages: 17