Explainable machine learning practices: opening another black box for reliable medical AI

Cited: 0
Authors
Emanuele Ratti
Mark Graves
Affiliations
[1] Johannes Kepler University Linz, Institute of Philosophy and Scientific Method
[2] Parexel AI Labs, Department of Humanities and Arts
[3] Technion Israel Institute of Technology
Source
AI and Ethics | 2022, Vol. 2, Issue 4
Keywords
Black box; Machine learning; Medical AI; Reliable AI; Values; Trustworthiness;
DOI
10.1007/s43681-022-00141-z
Abstract
In the past few years, machine learning (ML) tools have been implemented with success in the medical context. However, several practitioners have raised concerns about the lack of transparency—at the algorithmic level—of many of these tools; and solutions from the field of explainable AI (XAI) have been seen as a way to open the ‘black box’ and make the tools more trustworthy. Recently, Alex London has argued that in the medical context we do not need machine learning tools to be interpretable at the algorithmic level to make them trustworthy, as long as they meet some strict empirical desiderata. In this paper, we analyse and develop London’s position. In particular, we make two claims. First, we claim that London’s solution to the problem of trust can potentially address another problem, which is how to evaluate the reliability of ML tools in medicine for regulatory purposes. Second, we claim that to deal with this problem, we need to develop London’s views by shifting the focus from the opacity of algorithmic details to the opacity of the way in which ML tools are trained and built. We claim that to regulate AI tools and evaluate their reliability, agencies need an explanation of how ML tools have been built, which requires documenting and justifying the technical choices that practitioners have made in designing such tools. This is because different algorithmic designs may lead to different outcomes, and to the realization of different purposes. However, given that the technical choices underlying algorithmic design are shaped by value-laden considerations, opening the black box of the design process also means making transparent, and motivating, the (technical and ethical) values and preferences behind such choices. Using tools from philosophy of technology and philosophy of science, we elaborate a framework showing what an explanation of the training process of ML tools in medicine should look like.
Pages: 801–814 (13 pages)