Algorithms for Interpretable Machine Learning

Cited by: 39
Authors
Rudin, Cynthia [1]
Affiliation
[1] MIT, 100 Main St, E62, Room 576, Cambridge, MA 02142 USA
Source
PROCEEDINGS OF THE 20TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING (KDD'14) | 2014
Keywords
Machine Learning; Interpretability; Comprehensibility; Understandability; Sparsity; Medical Calculators
DOI
10.1145/2623330.2630823
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Discipline codes
081104; 0812; 0835; 1405
Abstract
It is extremely important in many application domains to have transparency in predictive modeling. Domain experts tend not to prefer "black box" predictive models. They would like to understand how predictions are made, and possibly prefer models that emulate the way a human expert might make a decision, with a few important variables and a clear, convincing reason for a particular prediction. I will discuss recent work on interpretable predictive modeling with decision lists and sparse integer linear models. I will describe several approaches, including an algorithm based on discrete optimization and an algorithm based on Bayesian analysis. I will show examples of interpretable models for stroke prediction in medical patients and prediction of violent crime in young people raised in out-of-home care. Collaborators are Ben Letham, Berk Ustun, Stefano Traca, Siong Thye Goh, Tyler McCormick, and David Madigan.
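To make the two model classes named in the abstract concrete, here is a minimal sketch (not the paper's algorithms, and not learned from data): a sparse integer linear scoring model, where a prediction is made by summing a few integer point values and comparing against a threshold, and a decision list, an ordered sequence of if-then rules. All feature names, point values, and thresholds below are hypothetical, chosen only to illustrate the model forms.

```python
# Hypothetical scoring table for a sparse integer linear model:
# each feature contributes a small integer number of points.
POINTS = {
    "age_over_75": 2,
    "hypertension": 1,
    "prior_stroke": 3,
}
THRESHOLD = 3  # hypothetical decision threshold

def score(patient: dict) -> int:
    """Sum integer points for the features present in the patient record."""
    return sum(pts for feat, pts in POINTS.items() if patient.get(feat))

def predict(patient: dict) -> bool:
    """Predict high risk when the integer score meets the threshold."""
    return score(patient) >= THRESHOLD

# A decision list is an ordered sequence of if-then rules; the first rule
# whose condition matches determines the prediction. Conditions are
# hypothetical and for illustration only.
RULES = [
    (lambda p: bool(p.get("prior_stroke")), "high risk"),
    (lambda p: bool(p.get("age_over_75") and p.get("hypertension")), "high risk"),
    (lambda p: True, "low risk"),  # default rule, always matches last
]

def decision_list_predict(patient: dict) -> str:
    """Return the label of the first matching rule."""
    for condition, label in RULES:
        if condition(patient):
            return label

if __name__ == "__main__":
    patient = {"age_over_75": True, "prior_stroke": True}
    print(score(patient), predict(patient))        # 5 True
    print(decision_list_predict(patient))          # high risk
```

Both forms are interpretable in the sense the abstract describes: a human can trace exactly which few variables produced a prediction and why.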
Pages: 1519-1519 (1 page)