Investigating the Effect of Pre-processing Methods on Model Decision-Making in EEG-Based Person Identification

Cited by: 0
Authors
Tapia, Carlos Gomez [1 ]
Bozic, Bojan [1 ]
Longo, Luca [1 ]
Affiliations
[1] Technological University Dublin, Applied Intelligence Research Centre, School of Computer Science, Artificial Intelligence & Cognitive Load Research Lab, Dublin, Ireland
Source
Explainable Artificial Intelligence, xAI 2023, Part III | 2023, Vol. 1903
Keywords
Electroencephalography; eXplainable Artificial Intelligence; Deep Learning; Signal Processing; Attribution xAI Methods; Graph Neural Network; Biometrics; Signal-to-Noise Ratio; Electroencephalogram; Authentication; Classification; Artifact
DOI
10.1007/978-3-031-44070-0_7
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Electroencephalography (EEG) has emerged as a promising modality for biometric applications, offering unique and secure methods of personal identification and authentication. This research comprehensively compared EEG pre-processing techniques for biometric applications and, in tandem, examined the pivotal role of Explainable Artificial Intelligence (XAI) in enhancing the transparency and interpretability of machine learning models. Integrating XAI methodologies contributes significantly to the development of more precise, reliable, and ethically sound machine learning systems. A test accuracy exceeding 99% was observed for the biometric system, corroborating the Graph Neural Network (GNN) model's ability to distinguish between individuals. However, high accuracy does not unequivocally signify that the models have extracted meaningful features from the EEG data; despite the impressive test accuracy, a fundamental need remains for an in-depth understanding of the models. Attributions offer initial insights into the decision-making process, but they did not allow us to determine why specific channels contribute more than others, nor whether the models discerned genuine differences in cognitive processing. Nevertheless, deploying explainability techniques improved system-wide interpretability and revealed that the models learned to identify noise patterns to distinguish between individuals. Applying XAI techniques and fostering interdisciplinary partnerships that blend domain expertise from neuroscience and machine learning is necessary to interpret attributions further and illuminate the models' decision-making processes.
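As a concrete illustration of the pipeline the abstract describes, the following minimal Python sketch (not the authors' implementation) pairs an illustrative 0.5-40 Hz band-pass pre-processing step with gradient-times-input attribution over a toy classifier. The classifier, data dimensions, sampling rate, and frequency band are all assumptions standing in for the paper's GNN and attribution xAI methods.

import numpy as np
import torch
from scipy.signal import butter, filtfilt

def bandpass(eeg, fs=250.0, low=0.5, high=40.0, order=4):
    # Zero-phase band-pass filter; eeg has shape (channels, samples).
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg, axis=-1)

# Toy stand-in for the paper's GNN: any differentiable classifier works here.
n_channels, n_samples, n_subjects = 32, 500, 10   # illustrative dimensions
model = torch.nn.Sequential(
    torch.nn.Flatten(),
    torch.nn.Linear(n_channels * n_samples, n_subjects),
)

raw = np.random.randn(n_channels, n_samples)       # placeholder EEG trial
x = torch.tensor(bandpass(raw).copy(), dtype=torch.float32).unsqueeze(0)
x.requires_grad_(True)

score = model(x)[0].max()          # logit of the top-scoring subject
score.backward()                   # gradient of that logit w.r.t. the input
attribution = (x.grad * x).squeeze(0).detach()     # gradient-times-input map
channel_relevance = attribution.abs().sum(dim=-1)  # aggregate per channel
print(channel_relevance.argsort(descending=True)[:5])  # most contributory channels

Ranking channels by aggregated attribution magnitude mirrors the paper's question of why specific channels contribute more than others; a heavily artifact-laden channel scoring highest would be consistent with the finding that the models keyed on noise patterns rather than genuine cognitive features.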
Pages
131-152 (22 pages)