A multifaceted approach to detect gender biases in Natural Language Generation

Cited: 0
Authors
Consuegra-Ayala, Juan Pablo [1 ]
Martinez-Murillo, Ivan [3 ]
Lloret, Elena [2 ,3 ]
Moreda, Paloma [2 ,3 ]
Palomar, Manuel [2 ,3 ]
Affiliations
[1] Univ Havana, Sch Math & Comp Sci, Havana 10200, Cuba
[2] Univ Alicante, Univ Inst Comp Res IUII, Alicante 03690, Spain
[3] Univ Alicante, Dept Language & Comp Syst, Alicante 03690, Spain
Keywords
Natural Language Generation; Gender bias; Common sense resources; Risk
DOI
10.1016/j.knosys.2024.112367
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Recent advances in generative models have skyrocketed the popularity of conversational chatbots and revolutionized the way we interact with artificial intelligence. At the same time, research has shown that machine learning models can unconsciously reflect and amplify human biases. This is particularly dangerous for generative models given the huge popularity of such technologies. A fundamental source of bias in these technologies is the resources on which the models are trained. To address this issue, this paper proposes a methodology to analyze intrinsic gender bias in Natural Language Generation (NLG). Some works already propose metrics and approaches to measure bias in the Natural Language Processing field; however, there is no standard methodology for measuring gender bias in NLG. Therefore, adapting the Bias Score approach, our proposal involves three sequential stages applied to individual texts to detect intrinsic gender bias in NLG effectively: (i) word scoring; (ii) word filtering; and (iii) generative-word analysis. This methodology is applied to recent datasets and pre-trained models widely used for generating text with common sense. In particular, this paper analyzes the potential gender bias in the CommonGen and C2Gen datasets and in the SimpleNLG and T5 models. The results show the ability of the proposed methodology to detect gender bias in word distributions, presenting a strong correlation with the words typically associated with a specific gender. The results indicate that both tested datasets are intrinsically gender-biased, and consequently, models fine-tuned on those datasets are as well.
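To make the three stages concrete, the following is a minimal Python sketch, not the authors' implementation: it scores words with a co-occurrence log-ratio of the kind used in prior gender-bias work (e.g., Bordia and Bowman, 2019), filters out seed words and rare words, and ranks the remaining words by gender skew. The seed word lists, window size, and frequency threshold are illustrative assumptions, not the paper's exact configuration.

    # Illustrative three-stage pipeline: (i) word scoring, (ii) word
    # filtering, (iii) generative-word analysis. Seed lists, window size,
    # and frequency threshold are assumptions for demonstration only.
    import math
    from collections import Counter

    FEMALE_SEEDS = {"she", "her", "woman", "girl", "mother"}
    MALE_SEEDS = {"he", "him", "man", "boy", "father"}

    def word_scores(sentences, window=5):
        """Stage (i): score each word by the log ratio of its
        co-occurrence with female vs. male seed words."""
        f_counts, m_counts, totals = Counter(), Counter(), Counter()
        for sent in sentences:
            tokens = sent.lower().split()
            for i, tok in enumerate(tokens):
                totals[tok] += 1
                context = tokens[max(0, i - window): i + window + 1]
                if any(c in FEMALE_SEEDS for c in context):
                    f_counts[tok] += 1
                if any(c in MALE_SEEDS for c in context):
                    m_counts[tok] += 1
        # Add-one smoothing keeps the ratio defined for unseen pairs.
        scores = {tok: math.log((f_counts[tok] + 1) / (m_counts[tok] + 1))
                  for tok in totals}
        return scores, totals

    def filter_words(scores, totals, min_freq=2):
        """Stage (ii): drop the seed words themselves and rare words
        whose scores would be statistically unreliable."""
        gendered = FEMALE_SEEDS | MALE_SEEDS
        return {w: s for w, s in scores.items()
                if w not in gendered and totals[w] >= min_freq}

    def analyze(scores, top_k=5):
        """Stage (iii): surface the most male- and female-skewed words
        in the dataset or model-generated text."""
        ranked = sorted(scores.items(), key=lambda kv: kv[1])
        return {"male_skewed": ranked[:top_k],
                "female_skewed": ranked[-top_k:]}

    if __name__ == "__main__":
        corpus = [
            "she is a nurse at the hospital",
            "he is a doctor at the hospital",
            "the nurse said she would help",
            "the doctor said he would operate",
        ]
        scores, totals = word_scores(corpus)
        print(analyze(filter_words(scores, totals)))

On this toy corpus, "nurse" receives a positive (female-skewed) score and "doctor" a negative (male-skewed) one, which is the kind of word-distribution correlation the abstract reports for CommonGen and C2Gen.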
Pages: 12