Understanding the black-box: towards interpretable and reliable deep learning models

Cited by: 18
Authors
Qamar, Tehreem [1 ]
Bawany, Narmeen Zakaria [1 ]
Affiliations
[1] Jinnah Univ Women, Ctr Comp Res, Dept Comp Sci & Software Engn, Karachi, Pakistan
Keywords
Deep learning; Explainable AI; Transfer learning; Pre-trained models; Image classification;
DOI
10.7717/peerj-cs.1629
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep learning (DL) has revolutionized the field of artificial intelligence by providing sophisticated models for a diverse range of applications, from image and speech recognition to natural language processing and autonomous driving. However, deep learning models are typically black boxes whose reasons for a given prediction are unknown, so their reliability becomes questionable in many circumstances. Explainable AI (XAI) plays an important role in improving the transparency and interpretability of a model, thereby making it more reliable for real-time deployment. To investigate the reliability and truthfulness of DL models, this research develops image classification models via transfer learning and validates the results using an XAI technique. The contribution of this research is twofold: first, we employ three pre-trained models, VGG16, MobileNetV2, and ResNet50, with multiple transfer learning strategies on a fruit classification task comprising 131 classes; second, we inspect the reliability of the resulting models using Local Interpretable Model-Agnostic Explanations (LIME), a popular XAI technique that generates explanations for individual predictions. Experimental results reveal that transfer learning yields around 98% accuracy. The models' classifications were validated on different instances using LIME, and each model's predictions were found to be interpretable and understandable, as they are based on pertinent image features relevant to the particular classes. We believe this research offers insight into how an interpretation can be drawn from a complex AI model so that its accountability and trustworthiness can be increased.
Pages: 21