Fair Preprocessing: Towards Understanding Compositional Fairness of Data Transformers in Machine Learning Pipeline

Cited by: 59
Authors
Biswas, Sumon [1 ]
Rajan, Hridesh [1 ]
Affiliations
[1] Iowa State Univ, Dept Comp Sci, Ames, IA 50011 USA
Source
PROCEEDINGS OF THE 29TH ACM JOINT MEETING ON EUROPEAN SOFTWARE ENGINEERING CONFERENCE AND SYMPOSIUM ON THE FOUNDATIONS OF SOFTWARE ENGINEERING (ESEC/FSE '21) | 2021
Keywords
fairness; machine learning; preprocessing; pipeline; models; IMPACT;
DOI
10.1145/3468264.3468536
Chinese Library Classification
TP31 [Computer Software];
Subject Classification Code
081202; 0835;
Abstract
In recent years, many incidents have been reported in which machine learning models exhibited discrimination against people based on race, sex, age, etc. Research has been conducted to measure and mitigate unfairness in machine learning models. For a machine learning task, it is common practice to build a pipeline that includes an ordered set of data preprocessing stages followed by a classifier. However, most research on fairness has considered a single-classifier-based prediction task. What are the fairness impacts of the preprocessing stages in a machine learning pipeline? Furthermore, studies have shown that the root cause of unfairness is often ingrained in the data itself rather than in the model. But no research has been conducted to measure the unfairness caused by a specific transformation made in the data preprocessing stage. In this paper, we introduce the causal method of fairness to reason about the fairness impact of data preprocessing stages in an ML pipeline. We leverage existing metrics to define the fairness measures of the stages. We then conduct a detailed fairness evaluation of the preprocessing stages in 37 pipelines collected from three different sources. Our results show that certain data transformers cause the model to exhibit unfairness. We identify a number of fairness patterns in several categories of data transformers. Finally, we show how the local fairness of a preprocessing stage composes into the global fairness of the pipeline. We use this fairness composition to choose an appropriate downstream transformer that mitigates unfairness in the machine learning pipeline.
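The abstract describes measuring the fairness impact of an individual preprocessing stage by comparing pipeline fairness with and without that stage. A minimal sketch of that idea follows; the specific metric (statistical parity difference), the toy predictions, and all function names are illustrative assumptions, not taken from the paper:

```python
# Hypothetical sketch: quantify a preprocessing stage's fairness impact by
# comparing a group fairness metric on model predictions obtained with and
# without the transformer. The data here is synthetic and illustrative.

def statistical_parity_difference(y_pred, sensitive):
    """P(y_hat = 1 | unprivileged group) - P(y_hat = 1 | privileged group)."""
    priv = [y for y, s in zip(y_pred, sensitive) if s == 1]
    unpriv = [y for y, s in zip(y_pred, sensitive) if s == 0]
    rate = lambda group: sum(group) / len(group)
    return rate(unpriv) - rate(priv)

# Toy predictions from a classifier trained on raw vs. transformed data
# (sensitive: 1 = privileged group, 0 = unprivileged group).
sensitive        = [1, 1, 1, 1, 0, 0, 0, 0]
pred_raw         = [1, 1, 1, 0, 1, 0, 0, 0]  # pipeline without the transformer
pred_transformed = [1, 1, 0, 0, 1, 1, 0, 0]  # pipeline with the transformer

spd_raw = statistical_parity_difference(pred_raw, sensitive)
spd_tf  = statistical_parity_difference(pred_transformed, sensitive)

# The stage-level impact is the change in the metric the transformer induces;
# a value moving toward 0 suggests the stage mitigates unfairness.
impact = spd_tf - spd_raw
print(spd_raw, spd_tf, impact)
```

Under this sketch, a transformer whose inclusion moves the metric closer to zero would be a candidate "appropriate downstream transformer" in the sense the abstract describes.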
Pages: 981-993
Page count: 13