Beyond model interpretability: socio-structural explanations in machine learning

Cited by: 0
Authors
Smart, Andrew [1 ]
Kasirzadeh, Atoosa [2 ]
Affiliations
[1] Google Res, San Francisco, CA 94105 USA
[2] Univ Edinburgh, Edinburgh, Scotland
Keywords
Machine learning; Interpretability; Explainability; Social structures; Social structural explanations; Responsible AI; Racial bias; Health
DOI
10.1007/s00146-024-02056-1
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
What is it to interpret the outputs of an opaque machine learning model? One approach is to develop interpretable machine learning techniques. These techniques aim to show how machine learning models function by providing either model-centric local or global explanations, which can be based on mechanistic interpretations (revealing the inner working mechanisms of models) or non-mechanistic approximations (showing input feature-output data relationships). In this paper, we draw on social philosophy to argue that interpreting machine learning outputs in certain normatively salient domains could require appealing to a third type of explanation that we call "socio-structural" explanation. The relevance of this explanation type is motivated by the fact that machine learning models are not isolated entities but are embedded within and shaped by social structures. Socio-structural explanations aim to illustrate how social structures contribute to and partially explain the outputs of machine learning models. We demonstrate the importance of socio-structural explanations by examining a racially biased healthcare allocation algorithm. Our proposal highlights the need for transparency beyond model interpretability: understanding the outputs of machine learning systems could require a broader analysis that extends beyond the understanding of the machine learning model itself.
Pages: 2045-2053 (9 pages)