Demystifying the black box: an overview of explainability methods in machine learning

Cited by: 1
Authors
Kinger S. [1 ]
Kulkarni V. [1 ]
Affiliations
[1] Dr. Vishwanath Karad MIT-WPU, Pune
Keywords
Black-box; Deep learning; Explainable AI; XAI;
DOI
10.1080/1206212X.2023.2285533
Abstract
Deep learning algorithms have achieved remarkable accuracy in various domains such as image classification, face recognition, sentiment analysis, text classification, healthcare, and self-driving vehicles. However, their complex and opaque structures often hinder their adoption in mission-critical applications. The lack of interpretability in these models raises concerns about their reliability and trustworthiness. To address this challenge, Explainable Artificial Intelligence (XAI) methods have emerged to provide human-comprehensible interpretations of AI outcomes. In this paper, we delve into the latest advancements in XAI techniques, focusing on their methodologies, algorithms, and the scope of interpretations they offer. Our study evaluates these algorithms based on the quality of explanations they generate, their limitations, and their practical applications. By critically assessing their strengths and weaknesses, we shed light on the potential of XAI methods to bridge the gap between high-performance accuracy and interpretability in deep learning models. Through this comprehensive analysis, we aim to provide a deeper understanding of the state-of-the-art XAI techniques, empowering researchers and practitioners to make informed decisions regarding the integration of explainability into their AI systems. © 2023 Informa UK Limited, trading as Taylor & Francis Group.
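To make the idea of "human-comprehensible interpretations of AI outcomes" concrete, a minimal perturbation-based explanation can be sketched as follows. This is an illustrative toy only, not any specific algorithm from the survey: the "black box" here is a hypothetical fixed linear scorer standing in for a trained model, and feature importance is estimated by zeroing out each input feature and measuring how much the model's output changes.

```python
import numpy as np

def black_box_model(x):
    # Hypothetical "black box": a fixed linear scorer standing in for a
    # trained model whose internals the explainer cannot see.
    weights = np.array([2.0, -1.0, 0.5, 0.0])
    return float(weights @ x)

def perturbation_importance(model, x):
    """Score each feature by the output change when it is zeroed out."""
    base = model(x)
    importances = []
    for i in range(len(x)):
        perturbed = x.copy()
        perturbed[i] = 0.0  # occlude feature i
        importances.append(abs(base - model(perturbed)))
    return np.array(importances)

x = np.array([1.0, 1.0, 1.0, 1.0])
scores = perturbation_importance(black_box_model, x)
print(scores)  # feature 0 is most influential, feature 3 irrelevant
```

Such model-agnostic perturbation is the intuition behind occlusion- and sampling-based explainers: only input/output access is required, which is why these methods apply even to fully opaque models.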
Pages: 90-100 (10 pages)