KAN: Knowledge-aware Attention Network for Fake News Detection

Cited by: 0
Authors
Dun, Yaqian [1 ,2 ]
Tu, Kefei [1 ,2 ]
Chen, Chen [1 ,2 ]
Hou, Chunyan [3 ]
Yuan, Xiaojie [1 ,2 ]
Affiliations
[1] Nankai Univ, Coll Comp Sci, Tianjin, Peoples R China
[2] Tianjin Key Lab Network & Data Secur Technol, Tianjin, Peoples R China
[3] Tianjin Univ Technol, Sch Comp Sci & Engn, Tianjin, Peoples R China
Source
THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE | 2021 / Vol. 35
Funding
National Natural Science Foundation of China;
Keywords
DOI
Not available
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
The explosive growth of fake news on social media has drawn great concern from both industrial and academic communities. There has been an increasing demand for fake news detection due to its detrimental effects. Generally, news content is condensed and full of knowledge entities. However, existing methods usually focus on textual content and social context, and ignore the knowledge-level relationships among news entities. To address this limitation, in this paper we propose a novel Knowledge-aware Attention Network (KAN) that incorporates external knowledge from a knowledge graph for fake news detection. First, we identify entity mentions in news contents and align them with entities in the knowledge graph. Then, the entities and their contexts are used as external knowledge to provide complementary information. Finally, we design News towards Entities (N-E) attention and News towards Entities and Entity Contexts (N-E2C) attention to measure the importance of knowledge. Thus, our proposed model can incorporate both semantic-level and knowledge-level representations of news to detect fake news. Experimental results on three public datasets show that our model outperforms state-of-the-art methods and also validate the effectiveness of the knowledge attention.
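The sketch below illustrates the general idea described in the abstract: a news representation attends over knowledge-graph entity embeddings (in the spirit of the N-E attention), and the attended knowledge vector is fused with the semantic news representation before classification. This is a minimal illustrative sketch, not the authors' implementation; all module names, dimensions, and the fusion-by-concatenation choice are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class NewsEntityAttention(nn.Module):
    """Hypothetical sketch of news-towards-entities attention."""
    def __init__(self, news_dim: int, entity_dim: int, attn_dim: int = 128):
        super().__init__()
        self.q = nn.Linear(news_dim, attn_dim)    # project news representation to a query
        self.k = nn.Linear(entity_dim, attn_dim)  # project entity embeddings to keys
        self.classifier = nn.Linear(news_dim + entity_dim, 2)  # real vs. fake

    def forward(self, news_repr, entity_emb, entity_mask):
        # news_repr:   (batch, news_dim)            e.g. output of a text encoder
        # entity_emb:  (batch, n_ent, entity_dim)   KG embeddings of linked entities
        # entity_mask: (batch, n_ent)               1 for real entities, 0 for padding
        scores = torch.einsum("bd,bnd->bn", self.q(news_repr), self.k(entity_emb))
        scores = scores.masked_fill(entity_mask == 0, float("-inf"))
        weights = F.softmax(scores, dim=-1)                 # attention over entities
        knowledge = torch.einsum("bn,bnd->bd", weights, entity_emb)
        fused = torch.cat([news_repr, knowledge], dim=-1)   # semantic + knowledge levels
        return self.classifier(fused)

An analogous attention over entity-context embeddings (the N-E2C component mentioned above) could reuse the same pattern with the context embeddings as keys and values.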
Pages: 81-89
Number of pages: 9