Identifying Toxicity Within YouTube Video Comments

Times Cited: 36
Authors
Obadimu, Adewale [1 ]
Mead, Esther [1 ]
Hussain, Muhammad Nihal [1 ]
Agarwal, Nitin [1 ]
Affiliations
[1] Univ Arkansas, Little Rock, AR 72204 USA
Source
SOCIAL, CULTURAL, AND BEHAVIORAL MODELING, SBP-BRIMS 2019 | 2019 / Vol. 11549
Funding
US National Science Foundation;
Keywords
Social network analysis; Topic modeling; Toxicity analysis;
DOI
10.1007/978-3-030-21741-9_22
CLC Number
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Online Social Networks (OSNs), once regarded as safe havens for sharing information and providing mutual support among groups of people, have become breeding grounds for spreading toxic behaviors, political propaganda, and radicalizing content. Toxic individuals often hide under the auspices of anonymity to create fruitless arguments and divert the attention of other users from the core objectives of a community. In this study, we examined five recurring forms of toxicity among the comments posted on pro- and anti-NATO channels on YouTube. We leveraged the YouTube Data API to collect video and comment data from eight channels. We then utilized Google's Perspective API to assign toxicity scores to each comment. Our analysis suggests that, on average, commenters on the anti-NATO channels are more toxic than those on the pro-NATO channels. We further discovered that commenters on pro-NATO channels tend to use a mixture of toxic and innocuous comments. We generated word clouds to gauge word-use frequency and applied the Latent Dirichlet Allocation topic model to classify the comments by their overall topics. The topics extracted from the pro-NATO channels' comments were primarily positive, such as "Alliance" and "United", whereas the topics extracted from anti-NATO channels' comments were more geared toward geographical locations, such as "Russia", and negative components, such as "Profanity" and "Fake News". By identifying and examining the toxic behaviors of commenters on YouTube, our analysis addresses the pressing need to understand this toxicity.
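The pipeline described in the abstract (score each comment for toxicity, then extract topics with Latent Dirichlet Allocation) can be sketched as follows. This is an illustrative sketch only: the real study sends each comment to Google's Perspective API, which returns attribute scores in [0, 1]; here a hypothetical `score_toxicity` stub stands in for that network call, and scikit-learn's LDA implementation is used on a toy corpus since the paper does not name its tooling beyond the APIs.

```python
# Illustrative sketch of the abstract's pipeline: per-comment toxicity
# scoring plus LDA topic modeling (Blei et al., 2003).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

comments = [
    "NATO keeps the alliance strong and united",
    "This is fake news and propaganda",
    "United allies stand together in the alliance",
    "Russia says this story is fake",
]

def score_toxicity(comment: str) -> float:
    """Hypothetical stand-in for Perspective API: returns a score in [0, 1].

    The real API call POSTs the comment text and reads back attribute
    scores such as TOXICITY; this stub only mimics the output shape.
    """
    toxic_markers = {"fake", "propaganda"}
    words = comment.lower().split()
    return sum(w in toxic_markers for w in words) / len(words)

scores = [score_toxicity(c) for c in comments]

# Topic modeling: bag-of-words counts, then a 2-topic LDA fit.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(comments)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)  # one topic distribution per comment
```

Each row of `doc_topics` is a probability distribution over the two topics, which is how comments can be grouped under labels such as "Alliance" or "Fake News" in the study.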
Pages: 214 - 223
Number of pages: 10