Interpretable Decision Sets: A Joint Framework for Description and Prediction

Cited by: 424
Authors
Lakkaraju, Himabindu [1 ]
Bach, Stephen H. [1 ]
Leskovec, Jure [1 ]
Affiliations
[1] Stanford Univ, Stanford, CA 94305 USA
Source
KDD'16: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining | 2016
Funding
U.S. National Science Foundation
Keywords
Subgroup discovery; Contrast set; Rules
DOI
10.1145/2939672.2939874
CLC number
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
One of the most important obstacles to deploying predictive models is the fact that humans do not understand and trust them. Knowing which variables are important in a model's prediction and how they are combined can be very powerful in helping people understand and trust automatic decision-making systems. Here we propose interpretable decision sets, a framework for building predictive models that are highly accurate, yet also highly interpretable. Decision sets are sets of independent if-then rules. Because each rule can be applied independently, decision sets are simple, concise, and easily interpretable. We formalize decision set learning through an objective function that simultaneously optimizes accuracy and interpretability of the rules. In particular, our approach learns short, accurate, and non-overlapping rules that cover the whole feature space and pay attention to small but important classes. Moreover, we prove that our objective is a non-monotone submodular function, which we efficiently optimize to find a near-optimal set of rules. Experiments show that interpretable decision sets are as accurate at classification as state-of-the-art machine learning techniques. They are also three times smaller on average than rule-based models learned by other methods. Finally, results of a user study show that people are able to answer multiple-choice questions about the decision boundaries of interpretable decision sets, and to write descriptions of classes based on them, faster and more accurately than with other rule-based models that were designed for interpretability. Overall, our framework provides a new approach to interpretable machine learning that balances accuracy, interpretability, and computational efficiency.
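The abstract describes the core structure: a classifier is a set of independent if-then rules plus a default label, scored by an objective that trades accuracy against rule count and overlap. The Python sketch below is a minimal illustration of that structure only; the toy predicates, the penalty weights `lam_size` and `lam_overlap`, and the naive greedy selection loop are all hypothetical assumptions, not the authors' implementation or their actual objective.

```python
# Minimal sketch of a decision-set-style classifier, assuming hypothetical
# rules, penalty weights, and a naive greedy selection loop; an illustration
# of the structure described in the abstract, not the paper's method.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Rule:
    """An independent if-then rule: IF predicate(x) THEN label."""
    name: str
    predicate: Callable[[Dict[str, float]], bool]
    label: str


def predict(rules: List[Rule], x: Dict[str, float], default: str) -> str:
    """Each rule applies independently; uncovered points get the default label."""
    for rule in rules:
        if rule.predicate(x):
            return rule.label
    return default


def objective(rules: List[Rule], data: List[Dict[str, float]], labels: List[str],
              default: str, lam_size: float = 0.1, lam_overlap: float = 0.1) -> float:
    """Toy stand-in for an accuracy/interpretability trade-off: reward correct
    predictions, penalize the number of rules and multiply-covered points."""
    correct = sum(predict(rules, x, default) == y for x, y in zip(data, labels))
    overlap = sum(sum(r.predicate(x) for r in rules) > 1 for x in data)
    return correct - lam_size * len(rules) - lam_overlap * overlap


if __name__ == "__main__":
    # Hypothetical candidate rules and a tiny synthetic dataset.
    candidates = [
        Rule("high_bp", lambda x: x["bp"] > 140, "at-risk"),
        Rule("young_and_fit", lambda x: x["age"] < 30 and x["bmi"] < 25, "healthy"),
    ]
    data = [{"bp": 150, "age": 60, "bmi": 31}, {"bp": 115, "age": 25, "bmi": 22}]
    labels = ["at-risk", "healthy"]

    # Greedily keep a candidate rule only if it improves the toy objective
    # (a crude stand-in for the paper's submodular optimization).
    chosen: List[Rule] = []
    for rule in candidates:
        if objective(chosen + [rule], data, labels, "healthy") > objective(chosen, data, labels, "healthy"):
            chosen.append(rule)

    print([r.name for r in chosen])
    print([predict(chosen, x, "healthy") for x in data])
```

In the paper itself, rule selection is cast as maximizing a non-monotone submodular objective with approximation guarantees; the greedy loop above only mimics the spirit of that search on a toy example, and notice that it rejects a rule whose accuracy gain does not outweigh the size penalty.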
Pages: 1675-1684
Number of pages: 10