On Transferability of Bias Mitigation Effects in Language Model Fine-Tuning

Citations: 0
Authors
Jin, Xisen [1 ]
Barbieri, Francesco [2 ]
Kennedy, Brendan [1 ]
Davani, Aida Mostafazadeh [1 ]
Neves, Leonardo [2 ]
Ren, Xiang [1 ]
Affiliations
[1] Univ Southern Calif, Los Angeles, CA USA
[2] Snap Inc, Santa Monica, CA USA
Source
2021 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES (NAACL-HLT 2021) | 2021
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Fine-tuned language models have been shown to exhibit biases against protected groups in a host of modeling tasks such as text classification and coreference resolution. Previous works focus on detecting these biases, reducing bias in data representations, and using auxiliary training objectives to mitigate bias during fine-tuning. Although these techniques achieve bias reduction for the task and domain at hand, the effects of bias mitigation may not directly transfer to new tasks, requiring additional data collection, customized annotation of sensitive attributes, and re-evaluation of appropriate fairness metrics. We explore the feasibility and benefits of upstream bias mitigation (UBM) for reducing bias on downstream tasks, by first applying bias mitigation to an upstream model through fine-tuning and subsequently using it for downstream fine-tuning. We find, in extensive experiments across hate speech detection, toxicity detection, occupation prediction, and coreference resolution tasks over various bias factors, that the effects of UBM are indeed transferable to new downstream tasks or domains via fine-tuning, creating less biased downstream models than directly fine-tuning on the downstream task or transferring from a vanilla upstream model. Though challenges remain, we show that UBM promises more efficient and accessible bias mitigation in LM fine-tuning.
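The two-stage procedure the abstract describes can be sketched in miniature. This is a hypothetical toy illustration, not the paper's actual models, losses, or data: a small encoder is first fine-tuned upstream with a task loss plus a simple debiasing penalty (here, a correlation penalty between representation dimensions and a protected attribute, standing in for the paper's auxiliary bias-mitigation objectives), and the debiased encoder is then reused for plain downstream fine-tuning.

```python
# Hypothetical sketch of upstream bias mitigation (UBM) followed by
# downstream fine-tuning. Encoder, losses, and data are toy stand-ins.
import torch
import torch.nn as nn

torch.manual_seed(0)

class Encoder(nn.Module):
    """Tiny feed-forward encoder standing in for a pretrained LM."""
    def __init__(self, dim_in=16, dim_h=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim_in, dim_h), nn.ReLU())

    def forward(self, x):
        return self.net(x)

def upstream_bias_mitigation(encoder, x, y, a, epochs=50):
    """Stage 1: fine-tune upstream with a task loss plus a debiasing
    penalty discouraging the representation from encoding the
    protected attribute `a` (a simple correlation penalty here)."""
    head = nn.Linear(8, 1)
    opt = torch.optim.Adam(
        list(encoder.parameters()) + list(head.parameters()), lr=1e-2)
    bce = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        h = encoder(x)
        task_loss = bce(head(h).squeeze(-1), y)
        # Squared correlation of each representation dim with `a`.
        a_c = a - a.mean()
        h_c = h - h.mean(0)
        corr_penalty = (h_c * a_c.unsqueeze(1)).mean(0).pow(2).sum()
        (task_loss + corr_penalty).backward()
        opt.step()
    return encoder

def downstream_finetune(encoder, x, y, epochs=50):
    """Stage 2: reuse the debiased encoder; fine-tune on the
    downstream task with a plain task loss only."""
    head = nn.Linear(8, 1)
    opt = torch.optim.Adam(
        list(encoder.parameters()) + list(head.parameters()), lr=1e-2)
    bce = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = bce(head(encoder(x)).squeeze(-1), y)
        loss.backward()
        opt.step()
    return encoder, head

# Toy data: labels depend on feature 0; protected attribute on feature 1.
x_up = torch.randn(64, 16)
y_up = (x_up[:, 0] > 0).float()
a_up = (x_up[:, 1] > 0).float()
x_dn = torch.randn(64, 16)
y_dn = (x_dn[:, 0] > 0).float()

enc = upstream_bias_mitigation(Encoder(), x_up, y_up, a_up)
enc, head = downstream_finetune(enc, x_dn, y_dn)
preds = (head(enc(x_dn)).squeeze(-1) > 0).float()
acc = (preds == y_dn).float().mean().item()
print(f"downstream accuracy: {acc:.2f}")
```

The point of the sketch is the transfer pattern: the debiasing objective is applied only in stage 1, and the downstream task never sees the protected attribute, mirroring UBM's promise of avoiding per-task sensitive-attribute annotation.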
Pages: 3770-3783
Page count: 14