- [1] Ribeiro MT, Singh S, Guestrin C., “Why should I trust you?”: Explaining the predictions of any classifier, Proc. of the 22nd ACM SIGKDD Int’l Conf. on Knowledge Discovery and Data Mining, pp. 1135-1144, (2016)
- [2] Pedreschi D, Giannotti F, Guidotti R, Monreale A, Ruggieri S, Turini F., Meaningful explanations of black box AI decision systems, Proc. of the 33rd AAAI Conf. on Artificial Intelligence and 31st Innovative Applications of Artificial Intelligence Conf. and 9th AAAI Symp. on Educational Advances in Artificial Intelligence, (2019)
- [3] Lundberg SM, Lee SI., A unified approach to interpreting model predictions, Proc. of the 31st Int’l Conf. on Neural Information Processing Systems, pp. 4768-4774, (2017)
- [4] Ribeiro MT, Singh S, Guestrin C., Anchors: High-precision model-agnostic explanations, Proc. of the 32nd AAAI Conf. on Artificial Intelligence and 30th Innovative Applications of Artificial Intelligence Conf. and 8th AAAI Symp. on Educational Advances in Artificial Intelligence, (2018)
- [5] Lakkaraju H, Kamar E, Caruana R, Leskovec J., Faithful and customizable explanations of black box models, Proc. of the 2019 AAAI/ACM Conf. on AI, Ethics, and Society, pp. 131-138, (2019)
- [6] Bach S, Binder A, Montavon G, Klauschen F, Müller KR, Samek W., On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation, PLoS One, 10, 7, (2015)
- [7] Shrikumar A, Greenside P, Kundaje A., Learning important features through propagating activation differences, Proc. of the 34th Int’l Conf. on Machine Learning, pp. 3145-3153, (2017)
- [8] Amini A, Schwarting W, Soleimany A, Rus D., Deep evidential regression, Proc. of the 34th Int’l Conf. on Neural Information Processing Systems, (2020)
- [9] Camburu OM., Explaining deep neural networks, (2020)
- [10] Fan M, Wei WY, Xie XF, Liu Y, Guan XH, Liu T., Can we trust your explanations? Sanity checks for interpreters in Android malware analysis, IEEE Trans. on Information Forensics and Security, 16, pp. 838-853, (2020)