Imparting Fairness to Pre-Trained Biased Representations

Cited by: 1
Authors
Sadeghi, Bashir [1 ]
Boddeti, Vishnu Naresh [1 ]
Affiliations
[1] Michigan State Univ, E Lansing, MI 48824 USA
Keywords
DOI
10.1109/CVPRW50498.2020.00016
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Adversarial representation learning is a promising paradigm for obtaining data representations that are invariant to certain sensitive attributes while retaining the information necessary for predicting target attributes. Existing approaches solve this problem through iterative adversarial minimax optimization and lack theoretical guarantees. In this paper, we first study the "linear" form of this problem, i.e., the setting where all the players are linear functions. We show that the resulting optimization problem is both non-convex and non-differentiable. We obtain an exact closed-form expression for its global optima through spectral learning. We then extend this solution and analysis to non-linear functions through kernel representation. Numerical experiments on UCI and CIFAR-100 datasets indicate that (a) practically, our solution is ideal for "imparting" provable invariance to any biased pre-trained data representation, and (b) empirically, the trade-off between utility and invariance provided by our solution is comparable to iterative minimax optimization of existing deep neural network based approaches. Code is available at Human Analysis Lab.
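The linear setting the abstract describes can be illustrated with a minimal sketch. This is not the authors' spectral-learning solution, only a simpler stand-in for intuition: projecting pre-trained features onto the orthogonal complement of their cross-covariance with the sensitive attribute guarantees that no linear function of the projected features covaries with that attribute. The function name and toy data below are illustrative assumptions, not from the paper.

```python
import numpy as np

def remove_sensitive_directions(X, S):
    """Project centered features onto the orthogonal complement of the
    feature/sensitive-attribute cross-covariance directions, so that no
    linear function of the result covaries with S (a sketch, not the
    paper's closed-form spectral solution)."""
    Xc = X - X.mean(axis=0)
    Sc = S - S.mean(axis=0)
    C = Xc.T @ Sc                      # (d, k) cross-covariance directions
    Q, _ = np.linalg.qr(C)             # orthonormal basis of the sensitive subspace
    P = np.eye(X.shape[1]) - Q @ Q.T   # projector onto its orthogonal complement
    return Xc @ P

# Toy "biased pre-trained representation": first feature leaks S.
rng = np.random.default_rng(0)
S = rng.standard_normal((200, 1))                       # sensitive attribute
X = np.hstack([S + 0.1 * rng.standard_normal((200, 1)),
               rng.standard_normal((200, 3))])          # biased features
Z = remove_sensitive_directions(X, S)
```

Because the projector annihilates the column space of `Xc.T @ Sc`, the cross-covariance `Z.T @ Sc` is exactly zero (up to floating point), which is the sense in which linear invariance can be certified in closed form rather than via minimax training.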
Pages: 75-82
Page count: 8
Related Papers
50 records total
  • [21] Learning to Select Pre-trained Deep Representations with Bayesian Evidence Framework
    Kim, Yong-Deok
    Jang, Taewoong
    Han, Bohyung
    Choi, Seungjin
    2016 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2016, : 5318 - 5326
  • [22] XPhoneBERT: A Pre-trained Multilingual Model for Phoneme Representations for Text-to-Speech
    Nguyen, Linh The
    Pham, Thinh
    Nguyen, Dat Quoc
    INTERSPEECH 2023, 2023, : 5506 - 5510
  • [23] Exploiting Word Semantics to Enrich Character Representations of Chinese Pre-trained Models
    Li, Wenbiao
    Sun, Rui
    Wu, Yunfang
    NATURAL LANGUAGE PROCESSING AND CHINESE COMPUTING, NLPCC 2022, PT I, 2022, 13551 : 3 - 15
  • [24] Data-Centric Explainable Debiasing for Improving Fairness in Pre-trained Language Models
    Li, Yingji
    Du, Mengnan
    Song, Rui
    Wang, Xin
    Wang, Ying
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: ACL 2024, 2024, : 3773 - 3786
  • [25] FairFix: Enhancing Fairness of Pre-trained Deep Neural Networks with Scarce Data Resources
    Li, Zhixin
    Zhu, Rui
    Wang, Zihao
    Li, Jiale
    Liu, Kaiyuan
    Qin, Yue
    Fan, Yongming
    Gu, Mingyu
    Lu, Zhihui
    Wu, Jie
    Chai, Hongfeng
    Wang, XiaoFeng
    Tang, Haixu
    PROCEEDINGS OF THE 2024 IEEE 10TH INTERNATIONAL CONFERENCE ON INTELLIGENT DATA AND SECURITY, IDS 2024, 2024, : 14 - 20
  • [26] Connecting Pre-trained Language Models and Downstream Tasks via Properties of Representations
    Wu, Chenwei
    Lee, Holden
    Ge, Rong
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [27] BYOL for Audio: Exploring Pre-Trained General-Purpose Audio Representations
    Niizumi, Daisuke
    Takeuchi, Daiki
    Ohishi, Yasunori
    Harada, Noboru
    Kashino, Kunio
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2023, 31 : 137 - 151
  • [28] Enhancing Pre-Trained Language Representations with Rich Knowledge for Machine Reading Comprehension
    Yang, An
    Wang, Quan
    Liu, Jing
    Liu, Kai
    Lyu, Yajuan
    Wu, Hua
    She, Qiaoqiao
    Li, Sujian
    57TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2019), 2019, : 2346 - 2357
  • [29] BiTimeBERT: Extending Pre-Trained Language Representations with Bi-Temporal Information
    Wang, Jiexin
    Jatowt, Adam
    Yoshikawa, Masatoshi
    Cai, Yi
    PROCEEDINGS OF THE 46TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, SIGIR 2023, 2023, : 812 - 821
  • [30] All Together Now! The Benefits of Adaptively Fusing Pre-trained Deep Representations
    Resheff, Yehezkel
    Lieder, Itay
    Hope, Tom
    ICPRAM: PROCEEDINGS OF THE 8TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION APPLICATIONS AND METHODS, 2019, : 135 - 144