Fair Contrastive Learning for Facial Attribute Classification

Cited by: 27
Authors
Park, Sungho [1]
Lee, Jewook [1]
Lee, Pilhyeon [1]
Hwang, Sunhee [2]
Kim, Dohyung [3]
Byun, Hyeran [1]
Affiliations
[1] Yonsei Univ, Seoul, South Korea
[2] LG Uplus, Seoul, South Korea
[3] SK Inc, Seoul, South Korea
Source
2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | 2022
Funding
National Research Foundation of Singapore;
Keywords
DOI
10.1109/CVPR52688.2022.01014
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Learning visual representations of high quality is essential for image classification. Recently, a series of contrastive representation learning methods have achieved preeminent success. In particular, SupCon [18] outperformed the dominant methods based on cross-entropy loss in representation learning. However, we notice that there could be potential ethical risks in supervised contrastive learning. In this paper, we for the first time analyze the unfairness caused by supervised contrastive learning and propose a new Fair Supervised Contrastive Loss (FSCL) for fair visual representation learning. Inheriting the philosophy of supervised contrastive learning, it encourages representations of the same class to be closer to each other than those of different classes, while ensuring fairness by penalizing the inclusion of sensitive-attribute information in the representation. In addition, we introduce a group-wise normalization to diminish the disparities of intra-group compactness and inter-class separability between demographic groups that give rise to unfair classification. Through extensive experiments on CelebA and UTKFace, we validate that the proposed method significantly outperforms SupCon and existing state-of-the-art methods in terms of the trade-off between top-1 accuracy and fairness. Moreover, our method is robust to the intensity of data bias and effectively works in incomplete supervised settings. Our code is available at https://github.com/sungho-Coo1G/FSCL.
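To make the loss design concrete, below is a minimal, illustrative PyTorch sketch of a SupCon-style objective with a fairness constraint in the spirit of FSCL: positives are same-class samples, and each anchor is contrasted only against samples that share its sensitive attribute, so pushing classes apart cannot exploit group membership. The function name, tensor shapes, and the exact masking rule are assumptions made here for illustration; this is not the authors' released implementation, which also includes the group-wise normalization described in the abstract (see the linked repository for the exact loss).

```python
# Illustrative sketch only -- NOT the authors' exact FSCL code.
# Assumption: a SupCon-style loss whose contrast set (denominator) is
# restricted to samples sharing the anchor's sensitive attribute.
import torch
import torch.nn.functional as F


def fair_supcon_sketch(features, labels, sensitive, temperature=0.1):
    """features: (N, D) embeddings; labels: (N,) target classes;
    sensitive: (N,) sensitive-attribute group ids (e.g., gender)."""
    features = F.normalize(features, dim=1)
    sim = features @ features.t() / temperature                 # (N, N) similarities
    n = features.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=features.device)

    same_class = labels.unsqueeze(0) == labels.unsqueeze(1)
    same_group = sensitive.unsqueeze(0) == sensitive.unsqueeze(1)

    # Contrast set: only pairs from the same sensitive group (excluding the anchor),
    # so separating classes cannot rely on sensitive-attribute cues.
    denom_mask = same_group & ~eye
    # Positives: same target class, drawn from that same contrast set.
    pos_mask = same_class & denom_mask

    logits = sim.masked_fill(~denom_mask, float("-inf"))
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)

    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    loss_per_anchor = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_counts

    valid = pos_mask.any(dim=1)                                 # anchors with >= 1 positive
    return loss_per_anchor[valid].mean()
```

In this sketch, anchors without any same-class, same-group partner in the batch are simply skipped; the paper's group-wise normalization instead rebalances the loss across demographic groups to equalize intra-group compactness and inter-class separability, which the sketch omits.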
Pages: 10379-10388
Number of pages: 10
Related references
45 references in total
  • [1] [Anonymous], 2016, Asian Conference on Computer Vision
  • [2] Fine-Grained Face Annotation Using Deep Multi-Task CNN
    Celona, Luigi
    Bianco, Simone
    Schettini, Raimondo
    [J]. SENSORS, 2018, 18 (08)
  • [3] Creager E, 2019, PR MACH LEARN RES, V97
  • [4] Randaugment: Practical automated data augmentation with a reduced search space
    Cubuk, Ekin D.
    Zoph, Barret
    Shlens, Jonathon
Le, Quoc V.
    [J]. 2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW 2020), 2020, : 3008 - 3017
  • [5] Dougherty Conor, 2015, TWITTER, P1
  • [6] Dwork C., 2012, P 3 INN THEOR COMP S, P214, DOI 10.1145/2090236.2090255
  • [7] Gretton A, 2012, J MACH LEARN RES, V13, P723
  • [8] Hardt Moritz, 2016, Proceedings of the 30th International Conference on Neural Information Processing Systems, P3323
  • [9] Gaussian Affinity for Max-margin Class Imbalanced Learning
    Hayat, Munawar
    Khan, Salman
    Zamir, Syed Waqas
    Shen, Jianbing
    Shao, Ling
    [J]. 2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 6478 - 6488
  • [10] Momentum Contrast for Unsupervised Visual Representation Learning
    He, Kaiming
    Fan, Haoqi
    Wu, Yuxin
    Xie, Saining
    Girshick, Ross
    [J]. 2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2020), 2020, : 9726 - 9735