Demystifying the black box: an overview of explainability methods in machine learning

Cited by: 1
Authors
Kinger S. [1 ]
Kulkarni V. [1 ]
Affiliations
[1] Dr. Vishwanath Karad MIT-WPU, Pune
Keywords
Black-box; Deep learning; Explainable AI; XAI;
DOI
10.1080/1206212X.2023.2285533
Abstract
Deep learning algorithms have achieved remarkable accuracy in domains such as image classification, face recognition, sentiment analysis, text classification, healthcare, and self-driving vehicles. However, their complex and opaque structures often hinder their adoption in mission-critical applications, and the lack of interpretability in these models raises concerns about their reliability and trustworthiness. To address this challenge, Explainable Artificial Intelligence (XAI) methods have emerged to provide human-comprehensible interpretations of AI outcomes. In this paper, we survey the latest advancements in XAI techniques, focusing on their methodologies, algorithms, and the scope of the interpretations they offer. We evaluate these algorithms based on the quality of the explanations they generate, their limitations, and their practical applications. By critically assessing their strengths and weaknesses, we aim to shed light on the potential of XAI methods to bridge the gap between high predictive accuracy and interpretability in deep learning models, giving researchers and practitioners a deeper understanding of the state of the art and empowering them to make informed decisions about integrating explainability into their AI systems. © 2023 Informa UK Limited, trading as Taylor & Francis Group.
Pages: 90–100
Page count: 10