Is Multi-Modal Necessarily Better? Robustness Evaluation of Multi-Modal Fake News Detection

Cited by: 6
Authors
Chen, Jinyin [1 ,2 ]
Jia, Chengyu [3 ]
Zheng, Haibin [3 ]
Chen, Ruoxi [3 ]
Fu, Chenbo [1 ,2 ]
Affiliations
[1] Zhejiang Univ Technol, Inst Cyberspace Secur, Hangzhou 310023, Peoples R China
[2] Zhejiang Univ Technol, Coll Informat Engn, Hangzhou 310023, Peoples R China
[3] Zhejiang Univ Technol, Hangzhou 310023, Peoples R China
Source
IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING | 2023, Vol. 10, No. 6
Funding
National Natural Science Foundation of China
Keywords
Detectors; Fake news; Robustness; Social networking (online); Feature extraction; Visualization; Games; Generative adversarial networks; Adversarial attack; backdoor attack; bias evaluation; fake news detection; multi-modal; robustness evaluation;
DOI
10.1109/TNSE.2023.3249290
CLC Number
T [Industrial Technology];
Subject Classification Code
08;
Abstract
The proliferation of fake news and its serious negative social influence have made fake news detection a necessary tool for web managers. Meanwhile, the multi-media nature of social media has made multi-modal fake news detection popular, since it captures more modal features than uni-modal detection methods. However, the current literature on multi-modal detection tends to pursue detection accuracy while ignoring the detector's robustness, i.e., its detection ability under abnormal inputs and malicious attacks. To address this problem, we propose a comprehensive robustness evaluation of multi-modal fake news detectors. In this work, we simulate the attack methods available to malicious users and developers, i.e., posting fake news and injecting backdoors. Specifically, we evaluate multi-modal detectors against five adversarial and two backdoor attack methods. The experimental results show that: (1) the detection performance of state-of-the-art detectors degrades significantly under adversarial attack, e.g., BDANN's detection accuracy on malicious news drops by 47% relative to normal news, even worse than a general detector (Att-RNN); (2) most multi-modal detectors are more vulnerable through the visual modality than through the textual modality; (3) backdoor attacks on popular-event news severely degrade detectors (accuracy drops by 20% on average); (4) these detectors degrade further (another 2% reduction in accuracy) when subjected to multi-modal attacks; and (5) defense methods improve the robustness of multi-modal detectors but cannot fully resist the effects of malicious attacks.
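To make the evaluation setup concrete, below is a minimal PyTorch sketch of one attack of the kind the abstract describes: an FGSM-style perturbation applied only to the visual modality of a multi-modal detector. This is an illustrative assumption, not the paper's code; ToyMultiModalDetector and fgsm_visual_attack are hypothetical names, and the actual study evaluates detectors such as BDANN and Att-RNN with five adversarial methods.

import torch
import torch.nn as nn

class ToyMultiModalDetector(nn.Module):
    # Hypothetical detector: fuses text and image features into 2 logits (real/fake).
    def __init__(self, text_dim=32):
        super().__init__()
        self.text_enc = nn.Linear(text_dim, 16)
        self.img_enc = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 16),
        )
        self.classifier = nn.Linear(32, 2)

    def forward(self, text_feat, image):
        fused = torch.cat([self.text_enc(text_feat), self.img_enc(image)], dim=1)
        return self.classifier(fused)

def fgsm_visual_attack(model, text_feat, image, label, eps=0.03):
    # Perturb only the image: one signed-gradient step, pixels kept in [0, 1].
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(text_feat, image), label)
    loss.backward()
    return (image + eps * image.grad.sign()).clamp(0, 1).detach()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = ToyMultiModalDetector()
    text = torch.randn(4, 32)           # placeholder text embeddings
    imgs = torch.rand(4, 3, 64, 64)     # placeholder news images in [0, 1]
    labels = torch.randint(0, 2, (4,))  # 0 = real news, 1 = fake news
    adv = fgsm_visual_attack(model, text, imgs, labels)
    clean = (model(text, imgs).argmax(1) == labels).float().mean()
    attacked = (model(text, adv).argmax(1) == labels).float().mean()
    print(f"clean acc: {clean:.2f}  adversarial acc: {attacked:.2f}")

A backdoor evaluation of the kind the abstract mentions would instead poison the training data, e.g., stamping a fixed trigger pattern onto images of targeted popular-event news and flipping their labels, rather than perturbing inputs at test time.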
Pages: 3144-3158
Number of pages: 15