Towards Fair Deep Anomaly Detection

Cited by: 26
Authors
Zhang, Hongjing [1]
Davidson, Ian [1]
Affiliations
[1] Univ Calif Davis, Davis, CA 95616 USA
Source
PROCEEDINGS OF THE 2021 ACM CONFERENCE ON FAIRNESS, ACCOUNTABILITY, AND TRANSPARENCY, FACCT 2021 | 2021
Keywords
machine learning; algorithmic fairness; anomaly detection; deep learning; adversarial learning
DOI
10.1145/3442188.3445878
Chinese Library Classification
TP301 [Theory, Methods]
Discipline Code
081202
Abstract
Anomaly detection aims to find instances that are considered unusual and is a fundamental problem of data science. Recently, deep anomaly detection methods were shown to achieve superior results, particularly on complex data such as images. Our work focuses on deep one-class classification for anomaly detection, which learns a mapping from the normal samples only. However, the non-linear transformation performed by deep learning can potentially find patterns associated with social bias. The challenge in adding fairness to deep anomaly detection is to make anomaly predictions that are simultaneously fair and accurate. In this paper, we propose a new architecture for fair anomaly detection (Deep Fair SVDD) and train it with an adversarial network to de-correlate the sensitive attributes from the learned representations. This differs from how fairness is typically added, namely as a regularizer or a constraint. Further, we propose two effective fairness measures and empirically demonstrate that existing deep anomaly detection methods are unfair. We show that our proposed approach can largely remove the unfairness with minimal loss in anomaly detection performance. Lastly, we conduct an in-depth analysis of the strengths and limitations of our proposed model, including parameter analysis, feature visualization, and run-time analysis.
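The abstract's core idea, compressing normal points around a center (Deep SVDD) while an adversary tries to recover the sensitive attribute from the embeddings, can be illustrated with a toy sketch. This is a minimal numpy illustration, not the authors' implementation: the linear encoder `W` (a stand-in for a deep network), the logistic discriminator `v`, the fixed center `c`, and the trade-off weight `lam` are all hypothetical choices made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "normal" training data from two demographic groups; the sensitive
# attribute s shifts the features, so a plain encoder would embed the
# groups differently (the bias the adversary is meant to remove).
n, d, k = 200, 4, 2
s = rng.integers(0, 2, n).astype(float)          # sensitive attribute per sample
X = rng.normal(0.0, 1.0, (n, d)) + 2.0 * s[:, None]

W = rng.normal(0.0, 0.1, (d, k))                 # linear encoder (stand-in for a deep net)
v = rng.normal(0.0, 0.1, k)                      # logistic discriminator on embeddings
c = (X @ W).mean(axis=0)                         # SVDD center, fixed after initialization

def svdd_loss(W):
    # Deep SVDD objective: mean squared distance of embeddings to the center.
    Z = X @ W
    return np.mean(np.sum((Z - c) ** 2, axis=1))

lr, lam = 0.05, 1.0
loss_start = svdd_loss(W)
for _ in range(300):
    Z = X @ W
    p = 1.0 / (1.0 + np.exp(-(Z @ v)))           # discriminator's P(s = 1 | z)
    # Discriminator step: gradient descent on its own logistic loss.
    v -= lr * Z.T @ (p - s) / n
    # Encoder step: descend the SVDD compactness loss while *ascending* the
    # discriminator loss (gradient reversal), so the embeddings carry as
    # little information about s as possible.
    g_svdd = 2.0 * X.T @ (Z - c) / n
    g_disc = X.T @ ((p - s)[:, None] * v[None, :]) / n
    W -= lr * (g_svdd - lam * g_disc)

loss_end = svdd_loss(W)
```

The adversarial update, rather than a fairness regularizer or constraint, is the distinguishing design choice the abstract highlights: the encoder and discriminator play a minimax game, and at equilibrium the discriminator cannot predict the sensitive attribute from the embeddings better than chance.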
Pages: 138-148
Page count: 11