Fairness-Aware Regression Robust to Adversarial Attacks

Cited by: 3
Authors
Jin, Yulu [1 ]
Lai, Lifeng [1 ]
Affiliation
[1] Univ Calif Davis, Dept Elect & Comp Engn, Davis, CA 95616 USA
Funding
U.S. National Science Foundation
Keywords
Data models; Predictive models; Numerical models; Training; Robustness; Linear programming; Signal processing algorithms; Fairness; minimax problem; adversarial robustness; APPROXIMATION;
DOI
10.1109/TSP.2023.3328111
CLC classification
TM [Electrical engineering]; TN [Electronics and communication technology]
Discipline codes
0808; 0809
Abstract
In this paper, we take a first step towards answering the question of how to design fair machine learning algorithms that are robust to adversarial attacks. Using a minimax framework, we aim to design an adversarially robust fair regression model that achieves optimal performance in the presence of an attacker who can either add a carefully designed adversarial data point to the dataset or perform a rank-one attack on it. Solving the proposed nonsmooth, nonconvex-nonconcave minimax problem yields both the optimal adversary and the robust fairness-aware regression model. On both synthetic data and real-world datasets, numerical results show that the proposed adversarially robust fair models outperform other fair machine learning models on poisoned datasets in both prediction accuracy and group-based fairness measures.
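The abstract's setup — a defender fitting a fairness-penalized regression while an attacker injects one poisoned point — can be illustrated with a minimal sketch. This is not the paper's exact minimax solution: it uses crude alternating gradient ascent/descent, a squared mean-residual-gap fairness proxy, and synthetic data, all of which are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: features X, binary group attribute g, linear labels y.
n, d = 200, 5
X = rng.normal(size=(n, d))
g = rng.integers(0, 2, size=n)
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

def loss(w, X, y, g, lam):
    """MSE plus a group-fairness penalty: squared gap between the
    two groups' mean residuals (one common fairness proxy)."""
    r = X @ w - y
    gap = r[g == 0].mean() - r[g == 1].mean()
    return np.mean(r ** 2) + lam * gap ** 2

def grad_w(w, X, y, g, lam):
    """Gradient of loss() with respect to the model weights w."""
    r = X @ w - y
    gap = r[g == 0].mean() - r[g == 1].mean()
    d_mse = 2 * X.T @ r / len(y)
    d_gap = X[g == 0].mean(axis=0) - X[g == 1].mean(axis=0)
    return d_mse + 2 * lam * gap * d_gap

# Alternating updates: the attacker adjusts one poisoned label y_adv
# to increase the defender's loss (ascent on the MSE term only, for
# brevity), while the defender descends on w over the poisoned set.
w = np.zeros(d)
x_adv, y_adv = rng.normal(size=d), 0.0
lam, lr = 1.0, 0.05
for _ in range(300):
    Xp = np.vstack([X, x_adv])
    yp = np.append(y, y_adv)
    gp = np.append(g, 0)
    # Attacker step: gradient ascent, d(MSE)/d(y_adv) = -2 r_adv / n.
    r_adv = x_adv @ w - y_adv
    y_adv += lr * (-2 * r_adv / len(yp))
    # Defender step: gradient descent on the fairness-penalized loss.
    w -= lr * grad_w(w, Xp, yp, gp, lam)
```

After the loop, `w` fits the clean data well despite the poisoned point, and `loss(w, X, y, g, lam)` is far below its value at `w = 0`; the paper instead characterizes the exact optimal attacker and defender for this minimax game.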
Pages: 4092-4105
Page count: 14