Imparting Fairness to Pre-Trained Biased Representations

Cited by: 1
Authors
Sadeghi, Bashir [1]
Boddeti, Vishnu Naresh [1]
Affiliation
[1] Michigan State University, East Lansing, MI 48824, USA
DOI
10.1109/CVPRW50498.2020.00016
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Adversarial representation learning is a promising paradigm for obtaining data representations that are invariant to certain sensitive attributes while retaining the information necessary for predicting target attributes. Existing approaches solve this problem through iterative adversarial minimax optimization and lack theoretical guarantees. In this paper, we first study the "linear" form of this problem, i.e., the setting where all the players are linear functions. We show that the resulting optimization problem is both non-convex and non-differentiable. We obtain an exact closed-form expression for its global optima through spectral learning. We then extend this solution and analysis to non-linear functions through kernel representation. Numerical experiments on UCI and CIFAR-100 datasets indicate that (a) practically, our solution is ideal for "imparting" provable invariance to any biased pre-trained data representation, and (b) empirically, the trade-off between utility and invariance provided by our solution is comparable to iterative minimax optimization of existing deep neural network-based approaches. Code is available at Human Analysis Lab.
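
The abstract's spectral, closed-form treatment of the linear case can be illustrated with a small numerical sketch. This is a hedged approximation of the idea, not the authors' exact algorithm or its guarantees: the function name fit_fair_projection, the trade-off weight lam, and the embedding dimension k are illustrative assumptions. The sketch learns a linear projection of frozen, pre-trained features whose leading eigendirections favor alignment with the target while penalizing leakage of the sensitive attribute.

    # Illustrative sketch only (not the paper's exact formulation): a linear
    # projection on top of frozen pre-trained features, obtained in closed form
    # from a single symmetric eigendecomposition.
    import numpy as np

    def fit_fair_projection(X, Y, S, k=32, lam=10.0):
        """X: (n, d) pre-trained features; Y: (n, c) one-hot targets;
        S: (n, s) one-hot sensitive attributes; k: output dim; lam: trade-off."""
        Xc = X - X.mean(axis=0, keepdims=True)   # center features
        Yc = Y - Y.mean(axis=0, keepdims=True)   # center target labels
        Sc = S - S.mean(axis=0, keepdims=True)   # center sensitive labels
        B = Xc.T @ Yc @ Yc.T @ Xc                # target-alignment matrix (d x d)
        A = Xc.T @ Sc @ Sc.T @ Xc                # sensitive-leakage matrix (d x d)
        M = B - lam * A                          # utility minus invariance penalty
        vals, vecs = np.linalg.eigh((M + M.T) / 2.0)  # symmetric eigendecomposition
        W = vecs[:, np.argsort(vals)[::-1][:k]]  # top-k eigendirections
        return W                                 # projected embedding: Z = Xc @ W

Under these assumptions, Z = Xc @ W would be the "fairer" embedding handed to a downstream target predictor; the kernel extension mentioned in the abstract would, in the same spirit, replace the centered features with a centered kernel feature map.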
Pages: 75 - 82
Page count: 8
Related Papers (50 records in total)
  • [1] Assessing Multilingual Fairness in Pre-trained Multimodal Representations
    Wang, Jialu
    Liu, Yang
    Wang, Xin Eric
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), 2022, : 2681 - 2695
  • [2] Pre-trained Affective Word Representations
    Chawla, Kushal
    Khosla, Sopan
    Chhaya, Niyati
    Jaidka, Kokil
    2019 8TH INTERNATIONAL CONFERENCE ON AFFECTIVE COMPUTING AND INTELLIGENT INTERACTION (ACII), 2019,
  • [3] Diffused Redundancy in Pre-trained Representations
    Nanda, Vedant
    Speicher, Till
    Dickerson, John P.
    Gummadi, Krishna P.
    Feizi, Soheil
    Weller, Adrian
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [4] Measuring Fairness with Biased Rulers: A Comparative Study on Bias Metrics for Pre-trained Language Models
    Delobelle, Pieter
    Tokpo, Ewoenam Kwaku
    Calders, Toon
    Berendt, Bettina
    NAACL 2022: THE 2022 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES, 2022, : 1693 - 1706
  • [5] On the Language Neutrality of Pre-trained Multilingual Representations
    Libovicky, Jindrich
    Rosa, Rudolf
    Fraser, Alexander
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2020, 2020, : 1663 - 1674
  • [6] Pre-trained molecular representations enable antimicrobial discovery
    Olayo-Alarcon, Roberto
    Amstalden, Martin K.
    Zannoni, Annamaria
    Bajramovic, Medina
    Sharma, Cynthia M.
    Brochado, Ana Rita
    Rezaei, Mina
    Müller, Christian L.
    NATURE COMMUNICATIONS, 16 (1)
  • [7] Pre-trained Language Model Representations for Language Generation
    Edunov, Sergey
    Baevski, Alexei
    Auli, Michael
    2019 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES (NAACL HLT 2019), VOL. 1, 2019, : 4052 - 4059
  • [8] Inverse Problems Leveraging Pre-trained Contrastive Representations
    Ravula, Sriram
    Smyrnis, Georgios
    Jordan, Matt
    Dimakis, Alexandros G.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [9] Are Pre-trained Convolutions Better than Pre-trained Transformers?
    Tay, Yi
    Dehghani, Mostafa
    Gupta, Jai
    Aribandi, Vamsi
    Bahri, Dara
    Qin, Zhen
    Metzler, Donald
    59TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS AND THE 11TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (ACL-IJCNLP 2021), VOL 1, 2021, : 4349 - 4359
  • [10] Pre-trained Visual Dynamics Representations for Efficient Policy Learning
    Luo, Hao
    Zhou, Bohan
    Lu, Zongqing
    COMPUTER VISION - ECCV 2024, PT LXXXI, 2025, 15139 : 249 - 267