Explainable machine learning practices: opening another black box for reliable medical AI

Cited by: 0
Authors
Emanuele Ratti
Mark Graves
Affiliations
[1] Johannes Kepler University Linz, Institute of Philosophy and Scientific Method
[2] Parexel AI Labs, Department of Humanities and Arts
[3] Technion Israel Institute of Technology
Source
AI and Ethics | 2022, Volume 2, Issue 4
Keywords
Black box; Machine learning; Medical AI; Reliable AI; Values; Trustworthiness;
DOI
10.1007/s43681-022-00141-z
Abstract
In the past few years, machine learning (ML) tools have been successfully implemented in the medical context. However, several practitioners have raised concerns about the lack of transparency—at the algorithmic level—of many of these tools, and solutions from the field of explainable AI (XAI) have been seen as a way to open the ‘black box’ and make the tools more trustworthy. Recently, Alex London has argued that in the medical context we do not need machine learning tools to be interpretable at the algorithmic level to make them trustworthy, as long as they meet some strict empirical desiderata. In this paper, we analyse and develop London’s position. In particular, we make two claims. First, we claim that London’s solution to the problem of trust can potentially address another problem, which is how to evaluate the reliability of ML tools in medicine for regulatory purposes. Second, we claim that to deal with this problem, we need to develop London’s views by shifting the focus from the opacity of algorithmic details to the opacity of the way in which ML tools are trained and built. We claim that to regulate AI tools and evaluate their reliability, agencies need an explanation of how ML tools have been built, which requires documenting and justifying the technical choices that practitioners have made in designing such tools. This is because different algorithmic designs may lead to different outcomes, and to the realization of different purposes. However, given that the technical choices underlying algorithmic design are shaped by value-laden considerations, opening the black box of the design process also means making transparent, and motivating, the (technical and ethical) values and preferences behind such choices. Using tools from philosophy of technology and philosophy of science, we develop a framework showing what an explanation of the training process of ML tools in medicine should look like.
Pages: 801–814 (13 pages)
Related papers (items 31–40 of 50)
  • [31] Prediction of Students' Adaptability Using Explainable AI in Educational Machine Learning Models. Nnadi, Leonard Chukwualuka; Watanobe, Yutaka; Rahman, Md. Mostafizer; John-Otumu, Adetokunbo Macgregor. APPLIED SCIENCES-BASEL, 2024, 14 (12)
  • [32] A Machine Learning and Explainable AI Approach for Predicting Secondary School Student Performance. Hasib, Khan Md; Rahman, Farhana; Hasnat, Rashik; Alam, Md Golam Rabiul. 2022 IEEE 12TH ANNUAL COMPUTING AND COMMUNICATION WORKSHOP AND CONFERENCE (CCWC), 2022: 399–405
  • [33] Explainable AI for Symptom-Based Detection of Monkeypox: a machine learning approach. Setegn, Gizachew Mulu; Dejene, Belayneh Endalamaw. BMC INFECTIOUS DISEASES, 2025, 25 (01)
  • [34] The Future of Fuzzy Sets in Finance: New Challenges in Machine Learning and Explainable AI. Muzzioli, Silvia. FUZZY LOGIC AND APPLICATIONS, WILF 2018, 2019, 11291: 265–268
  • [35] Automatic Modeling of Logic Device Performance Based on Machine Learning and Explainable AI. Kim, Seungju; Lee, Kwangseok; Noh, Hyeon-Kyun; Shin, Youngkyu; Chang, Kyu-Baik; Jeong, Jaehoon; Baek, Sangwon; Kang, Myunggil; Cho, Keunhwi; Kim, Dong-Won; Kim, Daesin. 2020 INTERNATIONAL CONFERENCE ON SIMULATION OF SEMICONDUCTOR PROCESSES AND DEVICES (SISPAD 2020), 2020: 47–50
  • [36] A Standard Baseline for Software Defect Prediction: Using Machine Learning and Explainable AI. Bommi, Nitin Sai; Negi, Atul. 2023 IEEE 47TH ANNUAL COMPUTERS, SOFTWARE, AND APPLICATIONS CONFERENCE, COMPSAC, 2023: 1798–1803
  • [37] Global and local interpretability techniques of supervised machine learning black box models for numerical medical data. Hakkoum, Hajar; Idri, Ali; Abnane, Ibtissam. ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 131
  • [38] The three ghosts of medical AI: Can the black-box present deliver? Quinn, Thomas P.; Jacobs, Stephan; Senadeera, Manisha; Le, Vuong; Coghlan, Simon. ARTIFICIAL INTELLIGENCE IN MEDICINE, 2022, 124
  • [39] Extending machine learning prediction capabilities by explainable AI in financial time series prediction. Celik, Taha Bugra; Ican, Ozgur; Bulut, Elif. APPLIED SOFT COMPUTING, 2023, 132
  • [40] Optimization of Wearable Biosensor Data for Stress Classification Using Machine Learning and Explainable AI. Shikha, Shikha; Sethia, Divyashikha; Indu, S. IEEE ACCESS, 2024, 12: 169310–169327