TA-BiLSTM: An Interpretable Topic-Aware Model for Misleading Information Detection in Mobile Social Networks

Cited by: 1
Authors
Chang, Shuyu [1 ]
Wang, Rui [1 ]
Huang, Haiping [2 ]
Luo, Jian [1 ]
Affiliations
[1] Nanjing Univ Posts & Telecommun, Sch Comp Sci, Nanjing 210003, Jiangsu, Peoples R China
[2] Jiangsu High Technol Res Key Lab Wireless Sensor, Nanjing 210003, Jiangsu, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Misleading information detection; Deep representation learning; Neural topic model; Attention mechanism; Mobile social networks;
DOI
10.1007/s11036-021-01847-w
Chinese Library Classification
TP3 [Computing technology; Computer technology]
Discipline Code
0812
Abstract
As essential information acquisition tools, mobile social networks have brought great convenience to everyday communication. However, misleading information such as spam emails, clickbait links, and false health information is pervasive on these networks. Prior studies have adopted various approaches to detecting such information, but they ignore global semantic features of the corpus and lack interpretability. In this paper, we propose a novel end-to-end model, Topic-Aware BiLSTM (TA-BiLSTM), to address these problems. We first design a neural topic model that mines global semantic patterns by encoding word relatedness into topic embeddings. Simultaneously, a detection model extracts local hidden states from the text content with LSTM layers. The model then fuses these global and local representations through a Topic-Aware attention mechanism and performs misleading information detection. Experiments on three real-world datasets show that TA-BiLSTM jointly generates more coherent topics and improves detection performance. Furthermore, a case study and visualizations demonstrate that TA-BiLSTM can discover latent topics and enhance interpretability.
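As a rough illustration of the fusion step described in the abstract (not the authors' implementation, which is specified in the full paper), a topic-aware attention layer can score each BiLSTM hidden state against a global topic embedding and pool the states into a single document representation. The function name, the bilinear scoring form, and all dimensions below are assumptions for the sketch:

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def topic_aware_attention(hidden_states, topic_vec, W):
    """Fuse local BiLSTM states with a global topic embedding.

    hidden_states: (T, d) local hidden states, one per token
    topic_vec:     (k,)   global topic embedding from the neural topic model
    W:             (d, k) learned bilinear scoring matrix (assumed form)
    Returns a (d,) document vector and the (T,) attention weights.
    """
    scores = hidden_states @ W @ topic_vec   # relevance of each token to the topic
    alpha = softmax(scores)                  # attention weights over tokens
    return alpha @ hidden_states, alpha      # topic-weighted pooling of local states

# Toy example: 6 tokens, 8-dim hidden states, 4-dim topic embedding
rng = np.random.default_rng(0)
H = rng.normal(size=(6, 8))
t = rng.normal(size=(4,))
W = rng.normal(size=(8, 4))
doc_vec, alpha = topic_aware_attention(H, t, W)
```

The attention weights `alpha` also serve the interpretability goal mentioned in the abstract: inspecting which tokens receive high weight under a given topic is one way to visualize what the detector attends to.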
Pages: 2298-2314 (17 pages)