Learning Fair Naive Bayes Classifiers by Discovering and Eliminating Discrimination Patterns

Cited by: 0
Authors
Choi, YooJung [1]
Farnadi, Golnoosh [2,3]
Babaki, Behrouz [4]
Van den Broeck, Guy [1]
Affiliations
[1] Univ Calif Los Angeles, Los Angeles, CA 90024 USA
[2] Mila, Montreal, PQ, Canada
[3] Univ Montreal, Montreal, PQ, Canada
[4] Polytech Montreal, Montreal, PQ, Canada
Source
THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE | 2020, Vol. 34
DOI
Not available
CLC Classification
TP18 [Theory of Artificial Intelligence]
Subject Classification
081104; 0812; 0835; 1405
Abstract
As machine learning is increasingly used to make real-world decisions, recent research efforts aim to define and ensure fairness in algorithmic decision making. Existing methods often assume a fixed set of observable features to define individuals, but they do not address the case where some features are unobserved at prediction time. In this paper, we study the fairness of naive Bayes classifiers, which allow classification from partial observations. In particular, we introduce the notion of a discrimination pattern: an individual who receives a different classification depending on whether some sensitive attributes were observed. A model is considered fair if it has no such pattern. We propose an algorithm that mines a naive Bayes classifier for discrimination patterns, and we show how to learn maximum-likelihood parameters subject to the resulting fairness constraints. Our approach iteratively discovers and eliminates discrimination patterns until a fair model is learned. An empirical evaluation on three real-world datasets demonstrates that we can remove exponentially many discrimination patterns by adding only a small fraction of them as constraints.
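To make the notion of a discrimination pattern concrete, the sketch below computes the gap between a naive Bayes classifier's posterior with and without the sensitive evidence, and flags a pattern when the absolute gap exceeds a threshold. This is one natural formalization consistent with the abstract's framing, not the paper's implementation; the probabilities, the variable names gender and income, and the threshold delta are all illustrative assumptions.

```python
# Toy naive Bayes model over binary variables, illustrating a
# discrimination-pattern check. All numbers below are made up.

prior_d = {0: 0.6, 1: 0.4}          # P(d) for decision variable d
cond = {                            # P(feature = 1 | d)
    "gender": {0: 0.5, 1: 0.7},     # hypothetical sensitive attribute
    "income": {0: 0.3, 1: 0.8},     # hypothetical non-sensitive attribute
}

def posterior_d1(evidence):
    """P(d = 1 | evidence) for a partial assignment {feature: 0 or 1}."""
    score = {}
    for d in (0, 1):
        p = prior_d[d]
        for f, v in evidence.items():
            p1 = cond[f][d]
            p *= p1 if v == 1 else 1.0 - p1
        score[d] = p
    return score[1] / (score[0] + score[1])

def degree(x, y):
    """Degree of discrimination of pattern (x, y): how much the posterior
    for an individual described by the non-sensitive evidence x shifts
    once the sensitive evidence y is also observed."""
    return posterior_d1({**x, **y}) - posterior_d1(x)

delta = 0.05                        # fairness threshold (illustrative)
x = {"income": 1}                   # observed non-sensitive evidence
y = {"gender": 1}                   # observed sensitive evidence
gap = degree(x, y)
print(f"degree = {gap:+.4f}  ->  discrimination pattern? {abs(gap) > delta}")
```

With these toy parameters the posterior rises from 0.64 to about 0.71 once gender is revealed, so the gap of roughly +0.07 exceeds delta and the pattern would be flagged. Note that the number of such partial observations is exponential in the number of features, which is why the paper's iterative discover-and-eliminate loop matters.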
Pages: 10077-10084
Page count: 8