Explainable machine learning for public policy: Use cases, gaps, and research directions

Cited by: 12
Authors
Amarasinghe, Kasun [1 ,2 ]
Rodolfa, Kit T. [1 ,2 ]
Lamba, Hemank [1 ,2 ]
Ghani, Rayid [1 ,2 ]
Affiliations
[1] Carnegie Mellon Univ, Machine Learning Dept, 4902 Forbes Ave, Pittsburgh, PA 15213 USA
[2] Carnegie Mellon Univ, Heinz Coll Informat Syst & Publ Policy, 4902 Forbes Ave, Pittsburgh, PA 15213 USA
Source
DATA & POLICY | 2023 / Vol. 5
Funding
The Andrew W. Mellon Foundation (USA);
Keywords
explainable machine learning; interpretable machine learning; public policy; BLACK-BOX; EXPLANATIONS; MODELS; RULES;
DOI
10.1017/dap.2023.2
Chinese Library Classification
C93 [Management Science]; D035 [National Administration]; D523 [Public Administration]; D63 [National Administration];
Subject Classification Codes
12 ; 1201 ; 1202 ; 120202 ; 1204 ; 120401 ;
Abstract
Explainability is highly desired in machine learning (ML) systems supporting high-stakes policy decisions in areas such as health, criminal justice, education, and employment. While the field of explainable ML has expanded in recent years, much of this work has not taken real-world needs into account. A majority of proposed methods are designed with generic explainability goals, without well-defined use cases or intended end users, and are evaluated on simplified tasks, benchmark problems/datasets, or with proxy users (e.g., Amazon Mechanical Turk). We argue that these simplified evaluation settings do not capture the nuances and complexities of real-world applications. As a result, the applicability and effectiveness of this large body of theoretical and methodological work in real-world applications remain unclear. In this work, we take steps toward addressing this gap for the domain of public policy. First, we identify the primary use cases of explainable ML within public policy problems. For each use case, we define the end users of explanations and the specific goals the explanations must fulfill. Finally, we map existing work in explainable ML to these use cases, identify gaps in established capabilities, and propose research directions to fill those gaps in order to have a practical societal impact through ML. Our contributions are (a) a methodology for explainable ML researchers to identify use cases and develop methods targeted at them, and (b) an application of that methodology to the domain of public policy, giving researchers an example of developing explainable ML methods that result in real-world impact.
Pages: 23