Toward Transparent AI for Neurological Disorders: A Feature Extraction and Relevance Analysis Framework

Cited by: 3
Authors
Woodbright, Mitchell D. [1 ]
Morshed, Ahsan [2 ]
Browne, Matthew [3 ]
Ray, Biplob [2 ]
Moore, Steven [4 ]
Affiliations
[1] Cent Queensland Univ, Sch Engn & Technol, Bundaberg, Qld 4670, Australia
[2] Cent Queensland Univ, Sch Engn & Technol, Melbourne, Vic 3000, Australia
[3] Cent Queensland Univ, Sch Med Hlth & Appl Sci, Bundaberg, Qld 4670, Australia
[4] Cent Queensland Univ, Sch Engn & Technol, Rockhampton, Qld 4702, Australia
Keywords
Feature extraction; Convolutional neural networks; Neurological diseases; Medical diagnostic imaging; Neurons; Alzheimer's disease; Brain cancer; Tumors; Epilepsy; Artificial intelligence; Explainable AI; Brain tumor; Deep learning
DOI
10.1109/ACCESS.2024.3375877
CLC Classification Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
The lack of interpretability and transparency in deep learning architectures has raised concerns among professionals in various industries and academia. One of the main concerns is the ability to trust these architectures without being given any insight into the decision-making process. Despite these concerns, researchers continue to explore new models and architectures that do not incorporate explainability into their main construct. In the medical domain, it is crucial to explain every decision, as patient health outcomes depend on the decisions made. Furthermore, in medical research, incorrectly diagnosed neurological conditions are a high-cost error that contributes significantly to morbidity and mortality. Therefore, the development of new transparent techniques for neurological conditions is critical. This paper presents a novel Autonomous Relevance Technique for an Explainable neurological disease prediction framework called ART-Explain. The proposed technique autonomously extracts features from within the deep learning architecture to create novel visual explanations of the resulting prediction. ART-Explain is an end-to-end autonomous explainable technique designed to present an intuitive and holistic overview of a prediction made by a deep learning classifier. To evaluate the effectiveness of our approach, we benchmark it against other state-of-the-art techniques using three data sets of neurological disorders. The results demonstrate the generalisation capabilities of our technique and its suitability for real-world applications. By providing transparent insights into the decision-making process, ART-Explain can improve end-user trust and enable a better understanding of classification outcomes in the detection of neurological diseases.
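The abstract describes extracting features from within a trained deep learning classifier and rendering them as visual explanations. As a rough illustration of that general idea only, and not the authors' ART-Explain implementation, the sketch below pulls an intermediate feature map from a pretrained Keras Xception model and collapses it into a normalised relevance heatmap; the choice of model, layer name, and preprocessing are illustrative assumptions.

    # Minimal sketch: intermediate CNN feature maps as a coarse visual explanation.
    # Not the ART-Explain method; model, layer, and image path are assumptions.
    import numpy as np
    import tensorflow as tf

    # A pretrained Xception backbone stands in for the diagnostic classifier.
    base = tf.keras.applications.Xception(weights="imagenet")

    # Sub-model exposing the activations of a late convolutional block.
    feature_model = tf.keras.Model(
        inputs=base.input,
        outputs=base.get_layer("block14_sepconv2_act").output,  # assumed layer name
    )

    def relevance_map(image_path: str) -> np.ndarray:
        """Return a [0, 1] heatmap by averaging intermediate feature maps."""
        img = tf.keras.utils.load_img(image_path, target_size=(299, 299))
        x = tf.keras.applications.xception.preprocess_input(
            np.expand_dims(tf.keras.utils.img_to_array(img), axis=0)
        )
        feats = feature_model.predict(x, verbose=0)[0]   # (H, W, C) activations
        heatmap = np.maximum(feats.mean(axis=-1), 0)     # collapse channels, keep positives
        return heatmap / (heatmap.max() + 1e-8)          # normalise for display

The resulting low-resolution map can be upsampled and overlaid on the input scan; richer relevance techniques weight the feature maps by their contribution to the predicted class rather than averaging them uniformly.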
Pages: 37731-37743
Number of pages: 13