A joint learning approach with knowledge injection for zero-shot cross-lingual hate speech detection

Cited by: 44
Authors
Pamungkas, Endang Wahyu [1 ]
Basile, Valerio [1 ]
Patti, Viviana [1 ]
Affiliations
[1] Univ Turin, Dept Comp Sci, Turin, Italy
Keywords
Hate speech detection; Cross-lingual classification; Social media; Transfer learning; Zero-shot learning; Ethnophaulisms
DOI
10.1016/j.ipm.2021.102544
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Discipline code
0812
Abstract
Hate speech is an increasingly important societal issue in the era of digital communication. Hateful expressions often make use of figurative language and, although they represent, in some sense, the dark side of language, they are also often prime examples of creative language use. While hate speech is a global phenomenon, current studies on automatic hate speech detection are typically framed in a monolingual setting. In this work, we explore hate speech detection in low-resource languages by transferring knowledge from a resource-rich language, English, in a zero-shot learning fashion. We experiment with traditional and recent neural architectures, and propose two joint-learning models that use different multilingual language representations to transfer knowledge between pairs of languages. We also evaluate the impact of additional knowledge by incorporating information from a multilingual lexicon of abusive words. The results show that our joint-learning models achieve the best performance on most languages. However, a simple approach that uses machine translation and a pre-trained English language model also achieves robust performance, whereas Multilingual BERT fails to obtain good performance in cross-lingual hate speech detection. We also found experimentally that external knowledge from a multilingual abusive lexicon improves the models' performance, specifically in detecting the positive class. The results of our experimental evaluation highlight a number of challenges and issues in this task. One of the main challenges concerns current benchmarks for hate speech detection, in particular how bias related to the topical focus of the datasets influences classification performance. The insufficient ability of current multilingual language models to transfer knowledge between languages in the specific task of hate speech detection also remains an open problem. However, our experimental evaluation and qualitative analysis show how the explicit integration of linguistic knowledge from a structured abusive language lexicon helps to alleviate this issue.
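As an illustration of the zero-shot cross-lingual setup described in the abstract, the sketch below fine-tunes a multilingual encoder (here bert-base-multilingual-cased, one kind of multilingual representation the paper compares) on English hate speech labels only and then evaluates it directly on a target language with no target-language training data. This is a minimal sketch, not the authors' implementation; the data lists, hyperparameters, and class names are hypothetical placeholders.

```python
# Minimal sketch of zero-shot cross-lingual transfer for hate speech detection.
# Assumptions: English training texts/labels and target-language test texts/labels
# are already loaded as Python lists; model choice and hyperparameters are illustrative.
import torch
from torch.utils.data import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "bert-base-multilingual-cased"  # shared multilingual encoder
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

class HateSpeechDataset(Dataset):
    """Wraps raw texts and binary labels (1 = hateful, 0 = not hateful)."""
    def __init__(self, texts, labels):
        self.encodings = tokenizer(texts, truncation=True, padding=True, max_length=128)
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

def zero_shot_transfer(en_texts, en_labels, tgt_texts, tgt_labels):
    """Fine-tune on English only, then evaluate on the target language (zero-shot)."""
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
    args = TrainingArguments(output_dir="out", num_train_epochs=3,
                             per_device_train_batch_size=16, logging_steps=50)
    trainer = Trainer(model=model, args=args,
                      train_dataset=HateSpeechDataset(en_texts, en_labels))
    trainer.train()
    # The target-language examples are never seen during training.
    return trainer.evaluate(eval_dataset=HateSpeechDataset(tgt_texts, tgt_labels))
```

The knowledge-injection idea mentioned in the abstract could be approximated in such a setup by concatenating lexicon-derived features (e.g., counts of matches against a multilingual lexicon of abusive words such as HurtLex) with the encoder output before classification; that extension is omitted here for brevity.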
Pages: 19