Demystifying the black box: an overview of explainability methods in machine learning

Cited by: 1
Authors
Kinger S. [1 ]
Kulkarni V. [1 ]
Affiliations
[1] Dr. Vishwanath Karad MIT-WPU, Pune
Keywords
Black-box; Deep learning; Explainable AI; XAI;
DOI
10.1080/1206212X.2023.2285533
Abstract
Deep learning algorithms have achieved remarkable accuracy in various domains such as image classification, face recognition, sentiment analysis, text classification, healthcare, and self-driving vehicles. However, their complex and opaque structures often hinder their adoption in mission-critical applications. The lack of interpretability in these models raises concerns about their reliability and trustworthiness. To address this challenge, Explainable Artificial Intelligence (XAI) methods have emerged to provide human-comprehensible interpretations of AI outcomes. In this paper, we delve into the latest advancements in XAI techniques, focusing on their methodologies, algorithms, and the scope of interpretations they offer. Our study revolves around evaluating these algorithms based on the quality of explanations they generate, their limitations, and their practical applications. By critically assessing their strengths and weaknesses, we aim to shed light on the potential of XAI methods to bridge the gap between high-performance accuracy and interpretability in deep learning models. Through this comprehensive analysis, we aim to provide a deeper understanding of the state-of-the-art XAI techniques, empowering researchers and practitioners to make informed decisions regarding the integration of explainability into their AI systems. © 2023 Informa UK Limited, trading as Taylor & Francis Group.
Pages: 90 - 100
Number of pages: 10
Related Papers
50 records in total
  • [1] In-Training Explainability Frameworks: A Method to Make Black-Box Machine Learning Models More Explainable
    Acun, Cagla
    Nasraoui, Olfa
    2023 IEEE INTERNATIONAL CONFERENCE ON WEB INTELLIGENCE AND INTELLIGENT AGENT TECHNOLOGY, WI-IAT, 2023, : 230 - 237
  • [2] The fifty shades of black: about black box AI and explainability in healthcare
    Raposo, Vera Lucia
    MEDICAL LAW REVIEW, 2025, 33 (01)
  • [3] Demystifying Black-box Learning Models of Rumor Detection from Social Media Posts
    Tafannum, Faiza
    Shopnil, Mir Nafis Sharear
    Salsabil, Anika
    Ahmed, Navid
    Alam, Md Golam Rabiul
    Reza, Md Tanzim
    2021 IEEE 12TH ANNUAL UBIQUITOUS COMPUTING, ELECTRONICS & MOBILE COMMUNICATION CONFERENCE (UEMCON), 2021, : 358 - 364
  • [4] Unlocking the black box: an in-depth review on interpretability, explainability, and reliability in deep learning
    Şahin, Emrullah
    Arslan, Naciye Nur
    Özdemir, Durmuş
    Neural Computing and Applications, 2025, 37 (2) : 859 - 965
  • [5] A-XAI: adversarial machine learning for trustable explainability
    Agrawal, Nishita
    Pendharkar, Isha
    Shroff, Jugal
    Raghuvanshi, Jatin
    Neogi, Akashdip
    Patil, Shruti
    Walambe, Rahee
    Kotecha, Ketan
    AI and Ethics, 2024, 4 (4): : 1143 - 1174
  • [6] Directive Explanations for Actionable Explainability in Machine Learning Applications
    Singh, Ronal
    Miller, Tim
    Lyons, Henrietta
    Sonenberg, Liz
    Velloso, Eduardo
    Vetere, Frank
    Howe, Piers
    Dourish, Paul
    ACM TRANSACTIONS ON INTERACTIVE INTELLIGENT SYSTEMS, 2023, 13 (04)
  • [7] Stop ordering machine learning algorithms by their explainability! A user-centered investigation of performance and explainability
    Herm, Lukas-Valentin
    Heinrich, Kai
    Wanner, Jonas
    Janiesch, Christian
    INTERNATIONAL JOURNAL OF INFORMATION MANAGEMENT, 2023, 69
  • [8] A Survey on the Explainability of Supervised Machine Learning
    Burkart, Nadia
    Huber, Marco F.
    JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH, 2021, 70 : 245 - 317
  • [9] Demystifying machine learning: a primer for physicians
    Scott, Ian A.
    INTERNAL MEDICINE JOURNAL, 2021, 51 (09) : 1388 - 1400
  • [10] Machine Learning Methods for Remote Sensing Applications: An Overview
    Schulz, Karsten
    Haensch, Ronny
    Soergel, Uwe
    EARTH RESOURCES AND ENVIRONMENTAL REMOTE SENSING/GIS APPLICATIONS IX, 2018, 10790