What transparency for machine learning algorithms?

Cited by: 1
Authors
Pégny M. [1 ]
Ibnouhsein I. [2 ]
Affiliations
[1] Université de Paris 1 Panthéon-Sorbonne, IHPST, 13, rue du Four, Paris
[2] Quantmetry, 52 rue d’Anjou, Paris
Keywords
Intelligibility; Machine learning; Transparency
DOI
10.3166/RIA.32.447-478
Abstract
Recently, the concept of “algorithmic transparency” has become of primary importance in public and scientific debates. In light of the proliferation of uses of the term “transparency”, we distinguish two fundamental families of uses of the concept: a descriptive family concerning the intrinsic epistemic properties of programs, foremost among them intelligibility and explicability, and a prescriptive family concerning the normative properties of their uses, foremost among them loyalty and fairness. Because one needs to understand an algorithm in order to explain it and to audit it, intelligibility is logically first in the philosophical study of transparency. To better delineate the challenges of intelligibility in the public use of algorithms, we introduce a distinction between the intelligibility of the procedure and the intelligibility of the outputs. Finally, we apply this distinction to the case of machine learning. © 2018 Lavoisier.
Pages: 447-478
Page count: 31
References
33 in total
  • [1] Abdollahi B., Nasraoui O., Explainable restricted Boltzmann machines for collaborative filtering, Proceedings of The 2016 ICML Workshop on Human Interpretability in Machine Learning, (2016)
  • [2] Loi 2016-1321 du 7 Octobre 2016 pour une république numérique, Journal Officiel De La République Française, (2016)
  • [3] Breiman L., Random forests, Machine Learning, 45, 1, pp. 5-32, (2001)
  • [4] Caldini C., Google est-il antisémite? Retrieved from
  • [5] Éthique De La Recherche En Apprentissage Machine, (2017)
  • [6] Condry N., Meaningful models: Utilizing conceptual structure to improve machine learning interpretability, Proceedings of The 2016 ICML Workshop on Human Interpretability in Machine Learning, (2016)
  • [7] Dhurandhar A., Iyengar V., Luss R., Shanmugam K., A formal framework to characterize interpretability of procedures, Proceedings of The 2017 ICML Workshop on Human Interpretability in Machine Learning, (2017)
  • [8] Doshi-Velez F., Kim B., Towards A Rigorous Science of Interpretable Machine Learning, (2017)
  • [9] Egele M., Scholte T., Kirda E., Kruegel C., A survey on automated dynamic malware-analysis techniques and tools, ACM. Comput. Surv., 44, 2, (2012)
  • [10] Big Data: Seizing Opportunities, Preserving Values, (2014)