Linguistic signals under misinformation and fact-checking: Evidence from user comments on social media

Cited by: 25
Authors
Jiang S. [1 ]
Wilson C. [1 ]
Affiliations
[1] Northeastern University, United States
Keywords
Fact-checking; Fake news; Misinformation; Social computing; Social media
DOI
10.1145/3274351
Abstract
Misinformation and fact-checking are opposite forces in the news environment: the former creates inaccuracies to mislead people, while the latter provides evidence to rebut the former. These news articles are often posted on social media and attract user engagement in the form of comments. In this paper, we investigate linguistic (especially emotional and topical) signals expressed in user comments in the presence of misinformation and fact-checking. We collect and analyze a dataset of 5,303 social media posts with 2,614,374 user comments from Facebook, Twitter, and YouTube, and associate these posts with fact-check articles from Snopes and PolitiFact for veracity rulings (i.e., from true to false). We find that linguistic signals in user comments vary significantly with the veracity of posts, e.g., we observe more misinformation-awareness signals and heavier emoji and swear-word usage on falser posts. We further show that these signals can help to detect misinformation. In addition, we find that while there are signals indicating positive effects after fact-checking, there are also signals indicating potential “backfire” effects. © 2018 Copyright held by the owner/author(s). Publication rights licensed to the Association for Computing Machinery.
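As a rough illustration of the kind of comment-level signal extraction the abstract describes, the sketch below counts emoji and swear-word occurrences per comment and averages them by veracity ruling. This is a minimal sketch, not the authors' pipeline: the SWEAR_WORDS list, the emoji regex, and the helper names (comment_signals, signals_by_veracity) are illustrative assumptions; the paper itself relies on established lexicons and richer emotional and topical features.

    # Minimal sketch: compare simple linguistic signals in comments
    # grouped by the veracity ruling of the post they respond to.
    import re
    from collections import defaultdict
    from statistics import mean

    # Hypothetical lexicon; a real analysis would use an established swear-word list.
    SWEAR_WORDS = {"damn", "hell", "crap"}
    # Rough emoji range (Miscellaneous Symbols through Symbols and Pictographs Extended-A).
    EMOJI_RE = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF]")

    def comment_signals(text):
        """Return per-comment counts of two illustrative signals."""
        tokens = re.findall(r"[a-z']+", text.lower())
        return {
            "emoji": len(EMOJI_RE.findall(text)),
            "swear": sum(t in SWEAR_WORDS for t in tokens),
        }

    def signals_by_veracity(comments):
        """comments: iterable of (veracity_label, comment_text) pairs."""
        grouped = defaultdict(lambda: defaultdict(list))
        for label, text in comments:
            for name, value in comment_signals(text).items():
                grouped[label][name].append(value)
        # Average each signal within each veracity group.
        return {
            label: {name: mean(values) for name, values in sig.items()}
            for label, sig in grouped.items()
        }

    if __name__ == "__main__":
        sample = [
            ("false", "This is such crap 😡😡"),
            ("false", "Totally fake, what the hell"),
            ("true", "Good reporting, thanks for sharing"),
        ]
        print(signals_by_veracity(sample))

Per-group averages like these could then serve as features for the kind of misinformation detection the abstract mentions, alongside the misinformation-awareness signals it highlights.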