OmniFair: A Declarative System for Model-Agnostic Group Fairness in Machine Learning

Cited by: 20
Authors
Zhang, Hantian [1 ]
Chu, Xu [1 ]
Asudeh, Abolfazl [2 ]
Navathe, Shamkant B. [1 ]
Affiliations
[1] Georgia Inst Technol, Atlanta, GA 30332 USA
[2] Univ Illinois, Chicago, IL USA
Source
SIGMOD '21: PROCEEDINGS OF THE 2021 INTERNATIONAL CONFERENCE ON MANAGEMENT OF DATA | 2021
Keywords
Algorithmic Bias; Group Fairness; Declarative Systems;
DOI
10.1145/3448016.3452787
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Machine learning (ML) is increasingly being used to make decisions in our society. ML models, however, can be unfair to certain demographic groups (e.g., African Americans or females) according to various fairness metrics. Existing techniques for producing fair ML models are either limited in the types of fairness constraints they can handle (e.g., preprocessing) or require nontrivial modifications to downstream ML training algorithms (e.g., in-processing). We propose OmniFair, a declarative system for supporting group fairness in ML. OmniFair features a declarative interface for users to specify desired group fairness constraints and supports all commonly used group fairness notions, including statistical parity, equalized odds, and predictive parity. OmniFair is also model-agnostic in the sense that it does not require modifications to a chosen ML algorithm. OmniFair also supports enforcing multiple user-declared fairness constraints simultaneously, which most previous techniques cannot. The algorithms in OmniFair maximize model accuracy while meeting the specified fairness constraints, and their efficiency is optimized based on a theoretically provable monotonicity property regarding the accuracy-fairness trade-off that is unique to our system. We conduct experiments on datasets commonly used in the fairness literature that exhibit bias against minority groups. We show that OmniFair is more versatile than existing algorithmic fairness approaches in terms of both supported fairness constraints and downstream ML models. OmniFair reduces the accuracy loss by up to 94.8% compared with the second-best method. OmniFair also achieves running time similar to preprocessing methods, and is up to 270x faster than in-processing methods.
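To make the group fairness notions named in the abstract concrete, the sketch below computes the statistical parity difference for a binary classifier's predictions. This is an illustrative, self-contained example of the metric itself, not OmniFair's actual API; the function name and the example group labels are our own.

```python
# Illustrative sketch (not OmniFair's API): statistical parity difference,
# one of the group fairness notions supported by the system.
def statistical_parity_difference(predictions, groups, protected):
    """Difference in positive-prediction rates between the protected group
    and everyone else; a value of 0 means statistical parity holds."""
    prot = [p for p, g in zip(predictions, groups) if g == protected]
    rest = [p for p, g in zip(predictions, groups) if g != protected]
    rate = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return rate(prot) - rate(rest)

# Toy example: both groups receive positive predictions at rate 2/3.
preds  = [1, 0, 1, 1, 0, 1]
groups = ["female", "female", "male", "male", "male", "female"]
print(statistical_parity_difference(preds, groups, protected="female"))  # 0.0
```

A declarative system like OmniFair would let a user bound such a disparity (e.g., require its absolute value to stay below a threshold) rather than compute it by hand.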
Pages: 2076-2088
Page count: 13