On Privacy, Accuracy, and Fairness Trade-Offs in Facial Recognition

Cited by: 0
Authors
Zarei, Amir [1 ]
Hassanpour, Ahmad [1 ]
Raja, Kiran [2 ]
Affiliations
[1] Norwegian Univ Sci & Technol, Dept Informat Secur & Commun Technol, N-2815 Gjovik, Norway
[2] Norwegian Univ Sci & Technol, Dept Comp Sci, N-2815 Gjovik, Norway
Source
IEEE ACCESS | 2025年 / 13卷
Keywords
Privacy; Accuracy; Training; Face recognition; Data models; Analytical models; Adaptation models; Stochastic processes; Noise; Ethnicity; Differential privacy; facial recognition; fairness; membership inference attack;
DOI
10.1109/ACCESS.2025.3536784
CLC number
TP [automation technology, computer technology];
Discipline code
0812;
Abstract
Face recognition (FR) technology involves inherent trade-offs among accuracy, privacy, and fairness. To investigate these, we present a deep learning FR model trained on the racially balanced BUPT-Balancedface dataset and incorporate differential privacy (DP) into a private variant of the model to ensure data confidentiality while retaining the focus on fairness. We analyze the verification accuracy of the private (under different privacy budgets) and non-private models on a variety of benchmark FR datasets. Our results show that the non-private model achieves accuracy comparable to current state-of-the-art models. The private model exhibits trade-offs between accuracy and privacy and between fairness and privacy: strengthening privacy tends to reduce both accuracy and fairness. Our findings indicate that DP reduces accuracy unevenly across demographic groups and suggest that adjusting the privacy budget allows a better balance of privacy, accuracy, and fairness. Furthermore, we extend our experiments to real-world bias by training the private model on the imbalanced CASIA-WebFace dataset, where variability in accuracy and fairness disparities is amplified, demonstrating the impact of dataset composition on the interplay between privacy, accuracy, and fairness. Additionally, we show that traditional membership inference attacks (MIAs) can compromise privacy in FR systems, and we introduce a more realistic, identity-based MIA (I-MIA) tailored specifically to FR. Our analysis demonstrates that DP significantly mitigates privacy risks from both traditional MIAs and the proposed I-MIA.
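The abstract does not spell out how DP is applied during training; the standard mechanism for deep models is DP-SGD (Abadi et al., 2016): clip each per-sample gradient to a fixed norm, then add calibrated Gaussian noise before averaging. The sketch below illustrates only that core step in pure Python; the function name `dp_sgd_step` and the plain-list gradient representation are illustrative choices, not the paper's implementation.

```python
import math
import random

def dp_sgd_step(per_sample_grads, clip_norm, noise_multiplier, rng):
    """One DP-SGD aggregation step.

    Each per-sample gradient (a list of floats) is scaled so its L2 norm
    is at most clip_norm; the clipped gradients are summed, Gaussian noise
    with std = noise_multiplier * clip_norm is added per coordinate, and
    the result is averaged over the batch.
    """
    batch_size = len(per_sample_grads)
    dim = len(per_sample_grads[0])
    summed = [0.0] * dim
    for g in per_sample_grads:
        norm = math.sqrt(sum(x * x for x in g))
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        for i, x in enumerate(g):
            summed[i] += x * scale
    sigma = noise_multiplier * clip_norm
    return [(s + rng.gauss(0.0, sigma)) / batch_size for s in summed]
```

Raising `noise_multiplier` tightens the privacy budget (smaller epsilon) at the cost of noisier updates, which is the accuracy/privacy trade-off the paper measures; the clipping step is also a plausible source of the uneven per-demographic accuracy loss, since it dampens large gradients from under-fit groups.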
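The identity-based attack is only named in the abstract, so as a rough sketch: whereas a traditional MIA scores a single image, an identity-level attack can aggregate membership scores (e.g., negative losses or similarity scores) over all images of one person before thresholding. Everything below, including the function `identity_mia` and the score convention, is a hypothetical illustration of that idea, not the paper's I-MIA.

```python
from statistics import mean

def identity_mia(scores_by_identity, threshold):
    """Flag an identity as a suspected training member when the mean of
    its per-image membership scores (higher = more member-like) exceeds
    a calibrated threshold. Aggregating over an identity's images can
    reduce the variance of single-image attack decisions."""
    return {ident: mean(scores) > threshold
            for ident, scores in scores_by_identity.items()}
```

A stronger DP guarantee bounds how much any one identity's data can shift the model, which is why the paper finds DP mitigates both the per-image and the identity-level attack.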
Pages: 26050-26062
Page count: 13