33 references in total
- [11] Goodman B., Flaxman S., EU regulations on algorithmic decision-making and a "right to explanation", Proceedings of The 2016 ICML Workshop on Human Interpretability in Machine Learning, (2016)
- [12] Ribeiro M.T., Singh S., Guestrin C., Introduction to Local Interpretable Model-Agnostic Explanations (LIME), (2016)
- [13] Gunning D., Explainable Artificial Intelligence (XAI), (2017)
- [14] Hara S., Hayashi K., Making tree ensembles interpretable, Proceedings of The 2016 ICML Workshop on Human Interpretability in Machine Learning, (2016)
- [15] Hoffbeck J.P., Landgrebe D.A., Covariance matrix estimation and classification with limited training data, IEEE Trans. Pattern Anal. Mach. Intell., 18, 7, pp. 763-767, (1996)
- [16] Hornik K., Stinchcombe M.B., White H., Multilayer feedforward networks are universal approximators, Neural Networks, 2, pp. 359-366, (1989)
- [17] TransAlgo: Évaluer la responsabilité et la transparence des systèmes algorithmiques [TransAlgo: Assessing the Accountability and Transparency of Algorithmic Systems], (2017)
- [18] Kendall A., Gal Y., What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?, (2017)
- [19] Knight W., The dark secret at the heart of AI, MIT Technology Review, 120, 3, (2017)
- [20] Krause J., Perer A., Bertini E., Using visual analytics to interpret predictive machine learning models, Proceedings of The 2016 ICML Workshop on Human Interpretability in Machine Learning, (2016)