Fortifying Toxic Speech Detectors Against Veiled Toxicity

Cited by: 0
Authors
Han, Xiaochuang [1 ]
Tsvetkov, Yulia [1 ]
Affiliations
[1] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
Source
PROCEEDINGS OF THE 2020 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP) | 2020
Keywords
RACIAL MICROAGGRESSIONS;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Modern toxic speech detectors are incompetent in recognizing disguised offensive language, such as adversarial attacks that deliberately avoid known toxic lexicons, or manifestations of implicit bias. Building a large annotated dataset for such veiled toxicity can be very expensive. In this work, we propose a framework aimed at fortifying existing toxic speech detectors without a large labeled corpus of veiled toxicity. Just a handful of probing examples are used to surface orders of magnitude more disguised offenses. We augment the toxic speech detector's training data with these discovered offensive examples, thereby making it more robust to veiled toxicity while preserving its utility in detecting overt toxicity. Warning: this paper contains examples that may be offensive or upsetting.
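For intuition, the pipeline the abstract describes can be sketched as a select-and-augment loop: score a large unlabeled pool against the handful of probing examples, promote the closest matches to toxic training examples, and retrain. The sketch below is illustrative only; it substitutes a simple TF-IDF cosine-similarity heuristic for the paper's actual selection method, and every text, variable name, and model choice in it is hypothetical.

```python
# Minimal sketch of the fortification loop outlined in the abstract.
# TF-IDF cosine similarity stands in for the paper's actual probe-based
# selection method; all texts, names, and the classifier are hypothetical.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import cosine_similarity

# Toy labeled data for the base detector (1 = toxic, 0 = non-toxic).
train_texts = ["you are a complete idiot", "hope you have a great day",
               "nobody wants you here", "thanks for the helpful answer"]
train_labels = [1, 0, 1, 0]

# A handful of probing examples of veiled toxicity (no overt lexicon hits).
probes = ["people like you never really belong anywhere",
          "it is almost impressive how little you understand"]

# Large unlabeled pool that may hide disguised offenses.
pool = ["what a lovely photo", "folks like you never learn, do you",
        "great write-up, very clear", "almost cute how you keep trying"]

# Train the base detector.
vectorizer = TfidfVectorizer().fit(train_texts + probes + pool)
detector = LogisticRegression().fit(vectorizer.transform(train_texts),
                                    train_labels)

# Surface the pool examples closest to any probe; keep the top k.
k = 2
sims = cosine_similarity(vectorizer.transform(pool),
                         vectorizer.transform(probes)).max(axis=1)
discovered = [pool[i] for i in np.argsort(sims)[::-1][:k]]

# Augment the training data, label discovered examples toxic, and retrain.
aug_texts = train_texts + discovered
aug_labels = train_labels + [1] * len(discovered)
fortified = LogisticRegression().fit(vectorizer.transform(aug_texts),
                                     aug_labels)
```

A real deployment would draw the pool from the detector's own in-domain corpus and validate discovered examples before retraining, since any false positives here are relabeled directly into the training data.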
Pages: 7732-7739
Page count: 8