A joint learning approach with knowledge injection for zero-shot cross-lingual hate speech detection

Cited by: 49
Authors
Pamungkas, Endang Wahyu [1 ]
Basile, Valerio [1 ]
Patti, Viviana [1 ]
Affiliations
[1] Univ Turin, Dept Comp Sci, Turin, Italy
Keywords
Hate speech detection; Cross-lingual classification; Social media; Transfer learning; Zero-shot learning; ETHNOPHAULISMS;
DOI
10.1016/j.ipm.2021.102544
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology]
Subject Classification Code
0812
Abstract
Hate speech is an increasingly important societal issue in the era of digital communication. Hateful expressions often make use of figurative language and, although they represent, in some sense, the dark side of language, they are also often prime examples of creative use of language. While hate speech is a global phenomenon, current studies on automatic hate speech detection are typically framed in a monolingual setting. In this work, we explore hate speech detection in low-resource languages by transferring knowledge from a resource-rich language, English, in a zero-shot learning fashion. We experiment with traditional and recent neural architectures, and propose two joint-learning models that use different multilingual language representations to transfer knowledge between pairs of languages. We also evaluate the impact of additional knowledge in our experiments by incorporating information from a multilingual lexicon of abusive words. The results show that our joint-learning models achieve the best performance on most languages. However, a simple approach that uses machine translation and a pre-trained English language model achieves robust performance. In contrast, Multilingual BERT fails to obtain good performance in cross-lingual hate speech detection. We also found experimentally that the external knowledge from a multilingual abusive lexicon improves the models' performance, specifically in detecting the positive class. The results of our experimental evaluation highlight a number of challenges and issues in this particular task. One of the main challenges relates to current benchmarks for hate speech detection, in particular how bias tied to the topical focus of the datasets influences classification performance. The insufficient ability of current multilingual language models to transfer knowledge between languages in the specific task of hate speech detection also remains an open problem. However, our experimental evaluation and our qualitative analysis show how the explicit integration of linguistic knowledge from a structured abusive language lexicon helps to alleviate this issue.
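The knowledge-injection idea summarized in the abstract can be sketched as follows. This is a minimal illustration only: the lexicon entries, feature layout, and function names are assumptions for demonstration, not the paper's actual multilingual lexicon or model architecture.

```python
# Hedged sketch: injecting external lexicon knowledge into the input
# features of a hate speech classifier. A real system would use a
# multilingual resource (e.g. lists of abusive words per language) and a
# multilingual sentence encoder; here both are replaced by toy stand-ins.

ABUSIVE_LEXICON = {"idiot", "scum", "vermin"}  # toy stand-in for a multilingual abusive lexicon

def lexicon_features(tokens):
    """Simple knowledge-injection features: a binary presence flag and a
    length-normalized count of abusive-lexicon matches in the text."""
    hits = sum(1 for t in tokens if t.lower() in ABUSIVE_LEXICON)
    return [1.0 if hits else 0.0, hits / max(len(tokens), 1)]

def inject(sentence_embedding, tokens):
    """Concatenate lexicon features onto a (hypothetical) multilingual
    sentence embedding before passing it to the classification layer."""
    return list(sentence_embedding) + lexicon_features(tokens)

# Example: a 2-dimensional toy embedding plus the two lexicon features.
vec = inject([0.1, 0.2], ["you", "scum"])
# vec == [0.1, 0.2, 1.0, 0.5]
```

Because the lexicon features are language-independent once the lexicon covers the target language, they can transfer a hate-speech signal even when the encoder's cross-lingual alignment is weak, which matches the abstract's finding that the lexicon helps most on the positive class.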
Pages: 19