Demystifying the black box: an overview of explainability methods in machine learning

Cited by: 1
Authors
Kinger S. [1 ]
Kulkarni V. [1 ]
Affiliation
[1] Dr. Vishwanath Karad MIT-WPU, Pune
Keywords
Black-box; Deep learning; Explainable AI; XAI;
DOI
10.1080/1206212X.2023.2285533
Abstract
Deep learning algorithms have achieved remarkable accuracy in domains such as image classification, face recognition, sentiment analysis, text classification, healthcare, and self-driving vehicles. However, their complex and opaque structures often hinder their adoption in mission-critical applications, and the lack of interpretability raises concerns about their reliability and trustworthiness. To address this challenge, Explainable Artificial Intelligence (XAI) methods have emerged to provide human-comprehensible interpretations of AI outcomes. In this paper, we examine the latest XAI techniques, focusing on their methodologies, algorithms, and the scope of the interpretations they offer. We evaluate these algorithms based on the quality of the explanations they generate, their limitations, and their practical applications. By critically assessing their strengths and weaknesses, we shed light on the potential of XAI methods to bridge the gap between high predictive accuracy and interpretability in deep learning models. Through this comprehensive analysis, we aim to give researchers and practitioners a deeper understanding of state-of-the-art XAI techniques, empowering them to make informed decisions about integrating explainability into their AI systems. © 2023 Informa UK Limited, trading as Taylor & Francis Group.
Pages: 90-100 (10 pages)