What transparency for machine learning algorithms?

Cited by: 1
Authors
Pégny M. [1 ]
Ibnouhsein I. [2 ]
Affiliations
[1] Université de Paris 1 Panthéon-Sorbonne, IHPST, 13, rue du Four, Paris
[2] Quantmetry, 52 rue d’Anjou, Paris
Keywords
Intelligibility; Machine learning; Transparency;
DOI
10.3166/RIA.32.447-478
Abstract
Recently, the concept of “algorithmic transparency” has become of primary importance in public and scientific debates. In light of the proliferating uses of the term “transparency”, we distinguish two fundamental families of uses of the concept: a descriptive family concerning the intrinsic epistemic properties of programs, foremost among which are intelligibility and explicability, and a prescriptive family concerning the normative properties of their uses, foremost among which are loyalty and fairness. Because one must understand an algorithm in order to explain it and to audit it, intelligibility comes logically first in the philosophical study of transparency. To better delineate the challenges intelligibility raises for the public use of algorithms, we introduce a distinction between the intelligibility of the procedure and the intelligibility of the outputs. Finally, we apply this distinction to the case of machine learning. © 2018 Lavoisier.
Pages: 447-478
Page count: 31