Enhancing hierarchical attention networks with CNN and stylistic features for fake news detection

Cited by: 4
Authors
Alghamdi, Jawaher [1 ,2 ]
Lin, Yuqing [1 ,3 ]
Luo, Suhuai [1 ]
Affiliations
[1] Univ Newcastle, Sch Informat & Phys Sci, Newcastle 2308, Australia
[2] King Khalid Univ, Dept Comp Sci, Abha 62521, Saudi Arabia
[3] Jimei Univ, Sch Sci, Xiamen 361021, Peoples R China
Keywords
Fake news detection; Attention network; Social media misinformation; Text classification;
DOI
10.1016/j.eswa.2024.125024
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405
Abstract
The rise of social media platforms has led to a proliferation of false information in various forms. Identifying malicious entities on these platforms is challenging due to the complexities of natural language and the sheer volume of textual data. Compounding this difficulty is the ability of these entities to deliberately modify their writing style to make false information appear trustworthy. In this study, we propose a neural-based framework that leverages the hierarchical structure of input text to detect both fake news content and fake news spreaders. Our approach utilizes enhanced Hierarchical Convolutional Attention Networks (eHCAN), which incorporates both style-based and sentiment-based features to enhance model performance. Our results show that eHCAN outperforms several strong baseline methods, highlighting the effectiveness of integrating deep learning (DL) with stylistic features. Additionally, the framework uses attention weights to identify the most critical words and sentences, providing a clear explanation for the model's predictions. eHCAN not only demonstrates exceptional performance but also offers robust evidence to support its predictions.
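The abstract describes a hierarchical pipeline: convolution plus attention pooling over the words of each sentence, attention pooling over the resulting sentence vectors, and fusion with stylistic features before classification. The sketch below is a minimal illustrative NumPy mock-up of that general idea, not the authors' eHCAN implementation; all dimensions, weight initializations, and function names are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def conv1d(seq, kernels):
    """Same-padded 1D convolution with ReLU. seq: (T, d); kernels: (k, d, f)."""
    k, d, f = kernels.shape
    pad = k // 2
    padded = np.pad(seq, ((pad, pad), (0, 0)))
    out = np.empty((seq.shape[0], f))
    for t in range(seq.shape[0]):
        window = padded[t:t + k]                       # (k, d) local n-gram window
        out[t] = np.tensordot(window, kernels, axes=([0, 1], [0, 1]))
    return np.maximum(out, 0.0)

def attend(H, u):
    """Attention pooling: H (n, f) hidden states, u (f,) learned context vector."""
    alpha = softmax(H @ u)                             # importance weight per item
    return alpha @ H, alpha                            # pooled vector, weights

# Toy dimensions (assumed, not from the paper).
k, d, f, n_style = 3, 8, 16, 4
W_conv = rng.normal(size=(k, d, f)) * 0.1              # word-level conv filters
u_word = rng.normal(size=f)                            # word-attention context
u_sent = rng.normal(size=f)                            # sentence-attention context
w_out = rng.normal(size=f + n_style) * 0.1             # classifier weights

def hierarchical_forward(doc, style):
    """doc: list of (T_i, d) word-embedding matrices, one per sentence;
    style: (n_style,) handcrafted stylistic/sentiment features."""
    sent_vecs = []
    for sent in doc:
        H = conv1d(sent, W_conv)                       # local n-gram features
        v, _ = attend(H, u_word)                       # word-level attention pooling
        sent_vecs.append(v)
    S = np.stack(sent_vecs)                            # (n_sent, f)
    doc_vec, sent_alpha = attend(S, u_sent)            # sentence-level attention
    z = np.concatenate([doc_vec, style])               # fuse stylistic features
    p_fake = 1.0 / (1.0 + np.exp(-(z @ w_out)))        # sigmoid fake-news score
    return p_fake, sent_alpha

doc = [rng.normal(size=(5, d)), rng.normal(size=(7, d))]
style = rng.normal(size=n_style)
p, alpha = hierarchical_forward(doc, style)
```

The sentence-level `alpha` returned here is what makes such models interpretable: the weights sum to one and rank sentences by their contribution to the prediction, mirroring the explanation mechanism the abstract attributes to attention weights.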
Pages: 13
Related Papers
50 records
  • [1] Fake News Detection on Fake.Br Using Hierarchical Attention Networks
    Okano, Emerson Yoshiaki
    Liu, Zebin
    Ji, Donghong
    Ruiz, Evandro Eduardo Seron
    COMPUTATIONAL PROCESSING OF THE PORTUGUESE LANGUAGE, PROPOR 2020, 2020, 12037 : 143 - 152
  • [2] Hierarchical Co-Attention Selection Network for Interpretable Fake News Detection
    Ge, Xiaoyi
    Hao, Shuai
    Li, Yuxiao
    Wei, Bin
    Zhang, Mingshu
    BIG DATA AND COGNITIVE COMPUTING, 2022, 6 (03)
  • [3] Hierarchical Multi-modal Contextual Attention Network for Fake News Detection
    Qian, Shengsheng
    Wang, Jinguang
    Hu, Jun
    Fang, Quan
    Xu, Changsheng
    SIGIR '21 - PROCEEDINGS OF THE 44TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, 2021, : 153 - 162
  • [4] Multi-Domain Fake News Detection Based on Serial Attention Networks
    Qiu, Chongfeng
    PROCEEDINGS OF 2024 3RD INTERNATIONAL CONFERENCE ON CYBER SECURITY, ARTIFICIAL INTELLIGENCE AND DIGITAL ECONOMY, CSAIDE 2024, 2024, : 91 - 96
  • [5] KAHAN: Knowledge-Aware Hierarchical Attention Network for Fake News detection on Social Media
    Tseng, Yu-Wun
    Yang, Hui-Kuo
    Wang, Wei-Yao
    Peng, Wen-Chih
    COMPANION PROCEEDINGS OF THE WEB CONFERENCE 2022, WWW 2022 COMPANION, 2022, : 868 - 875
  • [6] MVAN: Multi-View Attention Networks for Fake News Detection on Social Media
    Ni, Shiwen
    Li, Jiawen
    Kao, Hung-Yu
    IEEE ACCESS, 2021, 9 : 106907 - 106917
  • [7] Attention-Based Deep Learning Models for Detection of Fake News in Social Networks
    Ramya, S. P.
    Eswari, R.
    INTERNATIONAL JOURNAL OF COGNITIVE INFORMATICS AND NATURAL INTELLIGENCE, 2021, 15 (04)
  • [8] Learning Contextual Features with Multi-head Self-attention for Fake News Detection
    Wang, Yangqian
    Han, Hao
    Ding, Ye
    Wang, Xuan
    Liao, Qing
    COGNITIVE COMPUTING - ICCC 2019, 2019, 11518 : 132 - 142
  • [9] Evaluating the effectiveness of publishers' features in fake news detection on social media
    Jarrahi, Ali
    Safari, Leila
    MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 82 (02) : 2913 - 2939
  • [10] Leveraging Diversity-Aware Context Attention Networks for Fake News Detection on Social Platforms
    Chen, Zhikai
    Wu, Peng
    Pan, Li
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,