In-Group Bias in Deep Learning-Based Face Recognition Models Due to Ethnicity and Age

Cited by: 4
Authors
Nagpal, Shruti [1 ]
Singh, Maneet [1 ]
Singh, Richa [2 ]
Vatsa, Mayank [2 ]
Ratha, Nalini K. Ratha [3 ]
Affiliations
[1] Indraprastha Institute of Information Technology Delhi, Department of Computer Science and Engineering, New Delhi, 110020, India
[2] Indian Institute of Technology Jodhpur, Department of Computer Science and Engineering, Jodhpur, 342011, India
[3] University at Buffalo, Department of Computer Science and Engineering, Buffalo, NY, 14260, United States
Source
IEEE Transactions on Technology and Society | 2023, Vol. 4, No. 1
Keywords
Behavioral research; Deep learning; Encoding (symbols); Job analysis
DOI
10.1109/TTS.2023.3241010
Abstract
Humans are known to favor individuals who belong to the same groups as themselves, a biased behavior termed in-group bias. These groups can be formed on the basis of ethnicity, age, or even a favorite sports team. Taking cues from this observation, we examine whether deep learning networks mimic this human behavior and are affected by in-group and out-group biases. In this first-of-its-kind research, the behavior of face recognition models is evaluated to understand whether, like humans, models encode group-specific features for face recognition, and where bias is encoded in these models. The analysis covers two use-cases of bias in face recognition models: age and ethnicity. Thorough experimental evaluation leads to several insights: (i) deep learning models focus on different facial regions for different ethnic and age groups, and (ii) large variation in face verification performance is observed across sub-groups, both for well-known networks and for deep networks we trained ourselves. Based on these observations, a novel bias index is presented for evaluating a trained model's level of bias. We believe that a better understanding of how deep learning models work and encode bias, together with the proposed bias index, will enable researchers to address the challenge of bias in AI and to develop more robust and fairer algorithms for mitigating bias. © 2020 IEEE.
Pages: 54-67