Deep visual-linguistic fusion network considering cross-modal inconsistency for rumor detection

Citations: 4
Authors
Yang, Yang [1 ]
Bao, Ran [2 ]
Guo, Weili [1 ]
Zhan, De-Chuan [2 ]
Yin, Yilong [3 ]
Yang, Jian [1 ]
Affiliations
[1] Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Nanjing 210094, Peoples R China
[2] Nanjing Univ, Natl Key Lab Novel Software Technol, Nanjing 210023, Peoples R China
[3] Shandong Univ, Sch Software, Jinan 250101, Shandong, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
multimodal learning; Wasserstein distance; rumor detection; features
DOI
10.1007/s11432-021-3530-7
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
With the development of the Internet, users can freely publish posts on various social media platforms, which offers great convenience for keeping abreast of the world. However, posts often carry rumors, and monitoring them manually requires considerable manpower. Owing to the success of modern machine learning techniques, especially deep learning models, rumor detection can be automated by casting it as a classification problem. Early attempts focused on building classifiers that rely on image or text information alone, i.e., a single modality in posts. More recent multimodal detection approaches employ an early- or late-fusion operator to aggregate information from multiple sources. Nevertheless, they only exploit multimodal embeddings for fusion and ignore another important detection signal, i.e., the inconsistency between modalities. To address this problem, we develop a novel deep visual-linguistic fusion network (DVLFN) that considers cross-modal inconsistency and detects rumors by comprehensively accounting for both modal aggregation and contrast information. Specifically, the DVLFN first utilizes visual and textual deep encoders, i.e., Faster R-CNN and bidirectional encoder representations from transformers (BERT), to extract global and regional embeddings for the image and text modalities. It then predicts a post's authenticity from two aspects: (1) intermodal inconsistency, which employs the Wasserstein distance to efficiently measure the similarity between regional embeddings of different modalities, and (2) modal aggregation, which empirically employs early fusion to aggregate the two modal embeddings for prediction. Consequently, the DVLFN composes its final prediction from the modal fusion and the inconsistency measure. Experiments are conducted on three real-world multimedia rumor detection datasets collected from Reddit, GoodNews, and Weibo. The results validate the superior performance of the proposed DVLFN.
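The intermodal inconsistency component described above measures how well one modality's set of regional embeddings can be "transported" onto the other's. A minimal sketch of such a measure, using entropy-regularized (Sinkhorn) optimal transport between two embedding sets with uniform region weights, might look as follows. This is illustrative only; the function name, the squared-Euclidean cost, and the uniform weights are assumptions, and the paper's exact formulation may differ.

```python
import numpy as np

def sinkhorn_wasserstein(X, Y, reg=0.1, n_iters=100):
    """Entropy-regularized Wasserstein distance between two sets of
    region embeddings X (m x d) and Y (n x d), uniform weights.
    A larger value indicates greater cross-modal inconsistency."""
    m, n = X.shape[0], Y.shape[0]
    # Pairwise cost matrix: squared Euclidean distance between regions.
    C = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=2)
    K = np.exp(-C / reg)                      # Gibbs kernel
    a, b = np.full(m, 1.0 / m), np.full(n, 1.0 / n)
    u, v = np.ones(m), np.ones(n)
    for _ in range(n_iters):                  # Sinkhorn fixed-point updates
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]           # Optimal transport plan
    return float(np.sum(P * C))               # Total transport cost

# Identical image/text region sets yield (near-)zero inconsistency;
# perturbed sets yield a strictly larger score.
rng = np.random.default_rng(0)
img_regions = rng.normal(size=(5, 8))
txt_tokens = img_regions.copy()
print(sinkhorn_wasserstein(img_regions, txt_tokens))
print(sinkhorn_wasserstein(img_regions, txt_tokens + 0.5))
```

In the full model, this scalar would be combined with the early-fusion prediction to form the final rumor score.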
Pages: 17
Related Papers
50 records
  • [1] Deep visual-linguistic fusion network considering cross-modal inconsistency for rumor detection
    Yang, Yang
    Bao, Ran
    Guo, Weili
    Zhan, De-Chuan
    Yin, Yilong
    Yang, Jian
    Science China Information Sciences, 2023, 66 (12) : 16 - 32
  • [2] Cross-Modal Rumor Detection Based on Adversarial Neural Network
    Meng, Jiana
    Wang, Xiaopei
    Li, Ting
    Liu, Shuang
    Zhao, Di
    Data Analysis and Knowledge Discovery, 2022, 6 (12) : 32 - 42
  • [3] A Weighted Cross-Modal Feature Aggregation Network for Rumor Detection
    Li, Jia
    Hu, Zihan
    Yang, Zhenguo
    Lee, Lap-Kei
    Wang, Fu Lee
    ADVANCES IN KNOWLEDGE DISCOVERY AND DATA MINING, PT VI, PAKDD 2024, 2024, 14650 : 42 - 53
  • [4] Cross-modal Attention Network with Orthogonal Latent Memory for Rumor Detection
    Wu, Zekai
    Chen, Jiaxin
    Yang, Zhenguo
    Xie, Haoran
    Wang, Fu Lee
    Liu, Wenyin
    WEB INFORMATION SYSTEMS ENGINEERING - WISE 2021, PT I, 2021, 13080 : 527 - 541
  • [5] Deception and deception detection: The role of cross-modal inconsistency
    Heinrich, CU
    Borkenau, P
    JOURNAL OF PERSONALITY, 1998, 66 (05) : 687 - 712
  • [6] Bilateral Cross-Modal Fusion Network for Robot Grasp Detection
    Zhang, Qiang
    Sun, Xueying
    SENSORS, 2023, 23 (06)
  • [7] Unsupervised Deep Fusion Cross-modal Hashing
    Huang, Jiaming
    Min, Chen
    Jing, Liping
    ICMI'19: PROCEEDINGS OF THE 2019 INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, 2019, : 358 - 366
  • [8] Enhancing Stock Price Prediction with Deep Cross-Modal Information Fusion Network
    Mandal, Rabi Chandra
    Kler, Rajnish
    Tiwari, Anil
    Keshta, Ismail
    Abonazel, Mohamed R.
    Tageldin, Elsayed M.
    Umaralievich, Mekhmonov Sultonali
    FLUCTUATION AND NOISE LETTERS, 2024, 23 (02):
  • [9] DCMFNet: Deep Cross-Modal Fusion Network for Referring Image Segmentation with Iterative Gated Fusion
    Huang, Zhen
    Xue, Mingcheng
    Liu, Yu
    Xu, Kaiping
    Li, Jiangquan
    Yu, Chenyang
    PROCEEDINGS OF THE 50TH GRAPHICS INTERFACE CONFERENCE, GI 2024, 2024,
  • [10] Deep Memory Network for Cross-Modal Retrieval
    Song, Ge
    Wang, Dong
    Tan, Xiaoyang
    IEEE TRANSACTIONS ON MULTIMEDIA, 2019, 21 (05) : 1261 - 1275