A behavioural analysis of credulous Twitter users

Cited by: 0

Authors
Balestrucci A. [1 ,2 ]
De Nicola R. [2 ,3 ]
Petrocchi M. [2 ,4 ]
Trubiani C. [1 ]
Affiliations
[1] Gran Sasso Science Institute, via M. Iacobucci 2, L'Aquila
[2] IMT School for Advanced Studies Lucca, Piazza San Francesco 19, Lucca
[3] CINI Cybersecurity Lab, Via Ariosto, 25, Roma
[4] Istituto di Informatica e Telematica - CNR, Via G. Moruzzi 1, Pisa
Source
Online Social Networks and Media | 2021, Vol. 23
Funding
European Union Horizon 2020;
Keywords
Credulous users; Disinformation spreading; Features analysis; Online behavioural analysis; Twitter;
DOI
10.1016/j.osnem.2021.100133
Abstract
Thanks to platforms such as Twitter and Facebook, people can learn about facts and events that would otherwise have been silenced. However, social media also contribute significantly to the fast spread of biased and false news targeting specific segments of the population. We have seen how false information can be spread using automated accounts, known as bots. Using Twitter as a benchmark, we investigate the behavioural attitudes of so-called ‘credulous’ users, i.e., genuine accounts that follow many bots. Leveraging our previous work, in which supervised learning was successfully applied to single out credulous users, we improve the classification task with a detailed feature analysis and provide evidence that simple and lightweight features are crucial to detecting such users. Furthermore, we study the differences in how credulous and non-credulous users interact with bots, and we find that credulous users tend to amplify the content posted by bots to a greater extent. We argue that detecting them can help reveal possible dissemination of spam content, propaganda, and, in general, information of little or no reliability. © 2021 Elsevier B.V.