An explainable and interpretable model for attention deficit hyperactivity disorder in children using EEG signals

Cited: 46
Authors
Khare, Smith K. [1 ]
Acharya, U. Rajendra [2 ,3 ,4 ,5 ,6 ]
Affiliations
[1] Aarhus Univ, Elect & Comp Engn Dept, DK-8200 Aarhus, Denmark
[2] Univ Southern Queensland, Sch Math Phys & Comp, Springfield, Australia
[3] Singapore Univ Social Sci, Sch Sci & Technol, Dept Biomed Engn, Singapore, Singapore
[4] Asia Univ, Dept Biomed Informat & Med Engn, Taichung, Taiwan
[5] Kumamoto Univ, Kumamoto, Japan
[6] Univ Malaya, Kuala Lumpur, Malaysia
Keywords
Attention deficit hyperactivity disorder; Electroencephalography; Variational mode decomposition; Explainable machine learning; Interpretable machine learning; ADHD; DIAGNOSIS; DECOMPOSITION; PREVALENCE; FEATURES;
DOI
10.1016/j.compbiomed.2023.106676
Chinese Library Classification (CLC)
Q [Biological Sciences];
Subject Classification Codes
07 ; 0710 ; 09 ;
Abstract
Background: Attention deficit hyperactivity disorder (ADHD) is a neurodevelopmental disorder that affects a person's sleep, mood, anxiety, and learning. Early diagnosis and timely medication can help individuals with ADHD perform daily tasks without difficulty. Electroencephalogram (EEG) signals can help neurologists detect ADHD by examining the changes occurring in them. However, EEG signals are complex, non-linear, and non-stationary, and the subtle differences between ADHD and healthy-control EEG signals are difficult to identify visually. Moreover, decisions made by existing machine learning (ML) models are unreliable, as similar performance is not guaranteed. Method: This paper explores a combination of variational mode decomposition (VMD) and the Hilbert transform (HT), termed VMD-HT, to extract hidden information from EEG signals. Forty-one statistical parameters extracted from the absolute values of the analytical mode functions (AMF) are classified using the explainable boosted machine (EBM) model. The interpretability of the model is tested using statistical analysis and performance measurement. The importance of features, channels, and brain regions is identified using glass-box and black-box approaches. The model's local and global explainability is visualized using Local Interpretable Model-agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), Partial Dependence Plots (PDP), and Morris sensitivity analysis. To the best of our knowledge, this is the first work to explore the explainability of model predictions in ADHD detection, particularly for children.
Results: Our results show that the explainable model achieves an accuracy of 99.81%, a sensitivity of 99.78%, a specificity of 99.84%, an F1-measure of 99.83%, a precision of 99.87%, a false detection rate of 0.13%, and a Matthews correlation coefficient, negative predictive value, and critical success index of 99.61%, 99.73%, and 99.66%, respectively, in detecting ADHD automatically with ten-fold cross-validation. The model provides an area under the curve of 100%, while detection rates of 99.87% and 99.73% are obtained for ADHD and HC, respectively. Conclusions: The model shows that the interpretability and explainability of the frontal region are highest compared to the pre-frontal, central, parietal, occipital, and temporal regions. Our findings provide important insight into the developed model, which is highly reliable, robust, interpretable, and explainable for clinicians detecting ADHD in children. Early and rapid ADHD diagnosis using robust explainable technologies may reduce the cost of treatment and lessen the number of patients undergoing lengthy diagnostic procedures.
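The VMD-HT feature pipeline summarized in the abstract can be sketched as follows. This is an illustrative approximation only, not the authors' implementation: fixed EEG-band filters stand in for variational mode decomposition, five example statistics stand in for the 41 parameters reported, and the function names and synthetic signal are hypothetical.

```python
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt
from scipy.stats import skew, kurtosis

def decompose(signal, fs, bands=((1, 4), (4, 8), (8, 13), (13, 30))):
    """Stand-in for VMD: split the signal into narrowband modes via
    band-pass filtering over standard EEG bands (illustration only)."""
    modes = []
    for lo, hi in bands:
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        modes.append(sosfiltfilt(sos, signal))
    return modes

def amf_features(mode):
    """Example statistics of the absolute analytical mode function |AMF|,
    obtained by taking the Hilbert envelope of one mode."""
    amf = np.abs(hilbert(mode))
    return [amf.mean(), amf.std(), skew(amf), kurtosis(amf), amf.max()]

fs = 256                                    # assumed sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)  # toy EEG epoch

# 4 modes x 5 statistics -> one 20-dimensional feature vector per epoch,
# which would then be fed to a classifier such as the EBM.
features = np.concatenate([amf_features(m) for m in decompose(eeg, fs)])
```

In the paper's actual pipeline, VMD adaptively extracts the modes and 41 statistical parameters per epoch are passed to the explainable boosted machine, whose glass-box structure is then probed with LIME, SHAP, PDP, and Morris sensitivity.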
Pages: 16