Explainable machine learning practices: opening another black box for reliable medical AI

Cited by: 0
Authors
Emanuele Ratti
Mark Graves
Affiliations
[1] Johannes Kepler University Linz, Institute of Philosophy and Scientific Method
[2] Parexel AI Labs, Department of Humanities and Arts
[3] Technion Israel Institute of Technology
Source
AI and Ethics | 2022, Volume 2, Issue 4
Keywords
Black box; Machine learning; Medical AI; Reliable AI; Values; Trustworthiness
DOI
10.1007/s43681-022-00141-z
Abstract
In the past few years, machine learning (ML) tools have been implemented with success in the medical context. However, several practitioners have raised concerns about the lack of transparency, at the algorithmic level, of many of these tools, and solutions from the field of explainable AI (XAI) have been seen as a way to open the ‘black box’ and make the tools more trustworthy. Recently, Alex London has argued that, in the medical context, we do not need machine learning tools to be interpretable at the algorithmic level to make them trustworthy, as long as they meet some strict empirical desiderata. In this paper, we analyse and develop London’s position. In particular, we make two claims. First, we claim that London’s solution to the problem of trust can potentially address another problem, which is how to evaluate the reliability of ML tools in medicine for regulatory purposes. Second, we claim that to deal with this problem, we need to develop London’s views by shifting the focus from the opacity of algorithmic details to the opacity of the way in which ML tools are trained and built. We claim that to regulate AI tools and evaluate their reliability, agencies need an explanation of how ML tools have been built, which requires documenting and justifying the technical choices that practitioners have made in designing such tools. This is because different algorithmic designs may lead to different outcomes and to the realization of different purposes. However, given that the technical choices underlying algorithmic design are shaped by value-laden considerations, opening the black box of the design process also means making transparent, and justifying, the (technical and ethical) values and preferences behind such choices. Using tools from philosophy of technology and philosophy of science, we elaborate a framework showing what an explanation of the training processes of ML tools in medicine should look like.
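To make the proposal concrete, here is a minimal sketch, not taken from the paper itself, of what machine-readable documentation of value-laden design choices could look like. The Python dataclasses below (DesignChoice, DesignDossier, and the fictional SepsisRisk example are all illustrative assumptions) pair each technical decision with the alternatives that were rejected and with both a technical and an ethical rationale.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical schema (illustrative only, not from the paper): one record
# per value-laden technical choice made while building a medical ML tool,
# so that a regulator can audit why the tool was designed as it was.

@dataclass
class DesignChoice:
    stage: str                # e.g. "data collection", "labelling", "model selection"
    decision: str             # what was actually done
    alternatives: List[str]   # options considered and rejected
    technical_rationale: str  # performance/robustness grounds for the decision
    value_rationale: str      # ethical/social grounds (fairness, safety, cost)

@dataclass
class DesignDossier:
    tool_name: str
    intended_purpose: str
    choices: List[DesignChoice] = field(default_factory=list)

    def report(self) -> str:
        """Render the dossier as a plain-text audit document."""
        lines = [f"Tool: {self.tool_name}",
                 f"Purpose: {self.intended_purpose}", ""]
        for i, c in enumerate(self.choices, 1):
            lines += [
                f"{i}. [{c.stage}] {c.decision}",
                f"   Alternatives considered: {', '.join(c.alternatives)}",
                f"   Technical rationale: {c.technical_rationale}",
                f"   Value rationale: {c.value_rationale}",
            ]
        return "\n".join(lines)

# Usage: a fictional sepsis early-warning tool with one documented choice.
dossier = DesignDossier(
    tool_name="SepsisRisk-0.1",
    intended_purpose="Early warning of sepsis onset in ICU patients",
)
dossier.choices.append(DesignChoice(
    stage="labelling",
    decision="Derive outcome labels from clinician discharge notes",
    alternatives=["billing codes", "lab-threshold heuristics"],
    technical_rationale="Lowest estimated label noise among the options",
    value_rationale="Billing codes under-report sepsis in uninsured patients",
))
print(dossier.report())
```

The point of the sketch is purely structural: every documented choice carries a value rationale alongside its technical one, which is what the abstract means by making the values behind algorithmic design transparent.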
Pages: 801–814
Page count: 13
Related Papers
50 items in total (entries 41–50 shown)
  • [41] Users' trust in black-box machine learning algorithms
    Nakashima, Heitor Hoffman
    Mantovani, Daielly
    Machado Junior, Celso
    REGE-REVISTA DE GESTAO, 2024, 31 (02): 237-250
  • [42] Nondestructive Prediction of Eggshell Thickness Using NIR Spectroscopy and Machine Learning with Explainable AI
    Ahmed, Md Wadud
    Alam, Sreezan
    Khaliduzzaman, Alin
    Emmert, Jason Lee
    Kamruzzaman, Mohammed
    ACS FOOD SCIENCE & TECHNOLOGY, 2025, 5 (02): 822-832
  • [43] Machine Learning for Black-Box Fuzzing of Network Protocols
    Fan, Rong
    Chang, Yaoyao
    INFORMATION AND COMMUNICATIONS SECURITY, ICICS 2017, 2018, 10631: 621-632
  • [44] Demystifying the black box: an overview of explainability methods in machine learning
    Kinger, S.
    Kulkarni, V.
    International Journal of Computers and Applications, 2024, 46 (02): 90-100
  • [45] Empowering Glioma Prognosis With Transparent Machine Learning and Interpretative Insights Using Explainable AI
    Palkar, Anisha
    Dias, Cifha Crecil
    Chadaga, Krishnaraj
    Sampathila, Niranjana
    IEEE ACCESS, 2024, 12: 31697-31718
  • [46] Explainable and Fair AI: Balancing Performance in Financial and Real Estate Machine Learning Models
    Acharya, Deepak Bhaskar
    Divya, B.
    Kuppan, Karthigeyan
    IEEE ACCESS, 2024, 12: 154022-154034
  • [47] The role of explainable AI in enhancing breast cancer diagnosis using machine learning and deep learning models
    Ansari, Zulfikar Ali
    Tripathi, Manish Madhava
    Ahmed, Rafeeq
    Discover Artificial Intelligence, 5 (1)
  • [48] Effective depression detection and interpretation: Integrating machine learning, deep learning, language models, and explainable AI
    Al Masud, Gazi Hasan
    Shanto, Rejaul Islam
    Sakin, Ishmam
    Kabir, Muhammad Rafsan
    ARRAY, 2025, 25
  • [49] Predicting glioma grades: integrating clinical and molecular data with machine learning and explainable AI
    Tuysuzoglu, Goksu
    Tokmak, Ozge Kart
    INTERNATIONAL JOURNAL OF INTELLIGENT ENGINEERING INFORMATICS, 2024, 12 (04)
  • [50] AI2: a novel explainable machine learning framework using an NLP interface
    Dessureault, Jean-Sebastien
    Massicotte, Daniel
    PROCEEDINGS OF 2023 8TH INTERNATIONAL CONFERENCE ON MACHINE LEARNING TECHNOLOGIES, ICMLT 2023, 2023: 1-7