Regional Tree Regularization for Interpretability in Deep Neural Networks

Cited: 0
Authors
Wu, Mike [1 ]
Parbhoo, Sonali [2 ,3 ]
Hughes, Michael C. [4 ]
Kindle, Ryan [5 ]
Celi, Leo [6 ]
Zazzi, Maurizio [7 ]
Roth, Volker [2 ]
Doshi-Velez, Finale [3 ]
Affiliations
[1] Stanford Univ, Stanford, CA 94305 USA
[2] Univ Basel, Basel, Switzerland
[3] Harvard Univ, SEAS, Cambridge, MA 02138 USA
[4] Tufts Univ, Medford, MA 02155 USA
[5] Massachusetts Gen Hosp, Boston, MA 02114 USA
[6] MIT, Cambridge, MA 02139 USA
[7] Univ Siena, Siena, Italy
Source
THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE | 2020 / Vol. 34
Funding
Swiss National Science Foundation;
Keywords
PREDICTION;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The lack of interpretability remains a barrier to adopting deep neural networks across many safety-critical domains. Tree regularization was recently proposed to encourage a deep neural network's decisions to resemble those of a globally compact, axis-aligned decision tree. However, it is often unreasonable to expect a single tree to predict well across all possible inputs. In practice, doing so could lead to neither interpretable nor performant optima. To address this issue, we propose regional tree regularization - a method that encourages a deep model to be well-approximated by several separate decision trees specific to predefined regions of the input space. Across many datasets, including two healthcare applications, we show our approach delivers simpler explanations than other regularization schemes without compromising accuracy. Specifically, our regional regularizer finds many more "desirable" optima compared to global analogues.
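The core idea in the abstract can be sketched as follows: distill a small decision tree from the network's predictions within each predefined input region, and use the trees' size as a per-region complexity penalty. This is a conceptual sketch only, not the authors' actual method (the paper trains with a differentiable surrogate of this penalty); the toy data, the region split on the first feature, the node count as the complexity proxy, and all hyperparameters below are illustrative assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

# Toy XOR-like data: label is the sign agreement of the two features.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)

# A small deep model standing in for the network being regularized.
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                    random_state=0).fit(X, y)

# Two predefined regions of the input space (illustrative choice).
regions = [X[:, 0] < 0, X[:, 0] >= 0]

# Per-region penalty: size of a shallow tree distilled from the net's
# predictions restricted to that region -- a proxy for how complex a
# faithful regional explanation must be.
penalties = []
for mask in regions:
    tree = DecisionTreeClassifier(max_depth=5, random_state=0)
    tree.fit(X[mask], net.predict(X[mask]))
    penalties.append(tree.tree_.node_count)

# Averaging the regional penalties yields the regularization term that
# would be added to the training loss.
regional_penalty = float(np.mean(penalties))
print(penalties, regional_penalty)
```

In the paper this penalty is made differentiable and minimized jointly with the prediction loss, so each region's decision boundary stays close to something a compact tree can express.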
Pages: 6413-6421
Page count: 9