On Privacy, Accuracy, and Fairness Trade-Offs in Facial Recognition

Cited: 0
Authors
Zarei, Amir [1 ]
Hassanpour, Ahmad [1 ]
Raja, Kiran [2 ]
Affiliations
[1] Norwegian Univ Sci & Technol, Dept Informat Secur & Commun Technol, N-2815 Gjovik, Norway
[2] Norwegian Univ Sci & Technol, Dept Comp Sci, N-2815 Gjovik, Norway
Source
IEEE ACCESS | 2025, Vol. 13
Keywords
Privacy; Accuracy; Training; Face recognition; Data models; Analytical models; Adaptation models; Stochastic processes; Noise; Ethnicity; Differential privacy; facial recognition; fairness; membership inference attack
DOI
10.1109/ACCESS.2025.3536784
CLC Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Face recognition (FR) technology involves an inherent interplay between accuracy, privacy, and fairness. To investigate this, we present a deep-learning FR model trained on the racially balanced BUPT-Balancedface dataset and incorporate differential privacy (DP) into a private variant of the model to ensure data confidentiality while retaining the focus on fairness. We analyze the verification accuracy of the private (under different privacy budgets) and non-private models on a variety of benchmark FR datasets. Our results show that the non-private model achieves accuracy comparable to current state-of-the-art models. The private model exhibits a trade-off between accuracy and privacy, as well as between fairness and privacy: strengthening privacy tends to reduce both accuracy and fairness. Our findings indicate that DP reduces accuracy unevenly across demographic groups and suggest that adjusting the privacy budget allows privacy, accuracy, and fairness to be balanced more effectively. We further extend our experiments to real-world bias by training the private model on the imbalanced CASIA-WebFace dataset, where accuracy variability and fairness disparities are amplified, demonstrating the impact of dataset composition on the interplay between privacy, accuracy, and fairness. Additionally, we show that traditional membership inference attacks (MIAs) can compromise privacy in FR systems, and we introduce a more realistic, identity-based MIA (I-MIA) tailored specifically to FR. Our analysis demonstrates that DP significantly mitigates the privacy risks posed by both traditional MIAs and the proposed I-MIA.
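The private models described above are trained under differential privacy, which in deep learning is commonly realized with DP-SGD: each sample's gradient is clipped to a fixed norm and Gaussian noise is added before the update, with the noise level (via a privacy accountant) determining the privacy budget epsilon. The following is a minimal sketch of that mechanism, assuming a standard PyTorch classifier; the abstract does not state the authors' optimizer or hyperparameters, so the learning rate, clipping norm, and noise multiplier here are purely illustrative.

```python
# Minimal DP-SGD sketch (PyTorch). Hyperparameters are illustrative
# assumptions, not the paper's configuration; raising noise_multiplier
# corresponds to a smaller privacy budget (stronger privacy).
import torch

def dp_sgd_step(model, loss_fn, images, labels,
                lr=0.05, clip_norm=1.0, noise_multiplier=1.1):
    """One DP-SGD update: clip each per-sample gradient to clip_norm,
    sum the clipped gradients, add Gaussian noise scaled by
    noise_multiplier * clip_norm, then apply the averaged result."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for x, y in zip(images, labels):              # per-sample gradients
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = min(1.0, clip_norm / (float(total_norm) + 1e-12))
        for s, g in zip(summed, grads):
            s.add_(g, alpha=scale)                # clipped contribution
    with torch.no_grad():
        for p, s in zip(params, summed):
            noise = torch.randn_like(s) * noise_multiplier * clip_norm
            p.add_((s + noise) / len(images), alpha=-lr)
```

Because the noise is added to every update, its effect on the learned representation can differ across demographic groups, which is the mechanism behind the uneven accuracy reduction the abstract reports.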
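On the attack side, a traditional MIA typically thresholds a per-sample signal such as the model's loss, whereas an identity-based attack must decide membership for an entire identity from all of its face images. The sketch below shows a loss-threshold MIA and one plausible identity-level aggregation; the per-identity mean rule and the function names are assumptions for illustration, not necessarily the I-MIA construction proposed in the paper.

```python
# Hedged sketch: loss-threshold MIA and an identity-level variant.
# The mean aggregation is an illustrative assumption.
import numpy as np

def sample_membership_scores(losses):
    """Traditional MIA signal: lower loss on a sample suggests it was
    seen in training, so negate the loss to get a membership score."""
    return -np.asarray(losses, dtype=float)

def identity_membership_score(losses_for_identity):
    """Identity-based MIA: aggregate the per-sample scores of all face
    images of one identity into a single decision score."""
    return float(np.mean(sample_membership_scores(losses_for_identity)))

def predict_member(score, tau):
    """Declare membership when the score exceeds a threshold tau,
    calibrated, e.g., on shadow models or a held-out population."""
    return score > tau
```

DP training blunts both attacks by bounding how much any single training sample (and, with enough noise, any single identity) can influence the model's outputs.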
Pages: 26050-26062
Page count: 13