I see artifacts: ICA-based EEG artifact removal does not improve deep network decoding across three BCI tasks
Cited by: 0
Authors: Kang, Taeho [1]; Chen, Yiyu [2]; Wallraven, Christian [2,3]
Affiliations:
[1] Tech Univ Wien, Inst Management Sci, Vienna, Austria
[2] Korea Univ, Dept Artificial Intelligence, Seoul, South Korea
[3] Korea Univ, Dept Brain & Cognit Engn, Seoul, South Korea
Funding:
Austrian Science Fund;
National Research Foundation of Singapore;
Keywords: EEG; artifact rejection; independent component analysis; automated data pre-processing; deep learning; neural networks; brain-computer interfaces
Keywords Plus: INDEPENDENT COMPONENT ANALYSIS; BLIND SOURCE SEPARATION; MOTOR IMAGERY; INFOMAX ALGORITHM; OCULAR ARTIFACTS; SIGNAL; CLASSIFICATION; NOISE; MEG; TOOL
DOI: 10.1088/1741-2552/ad788e
Chinese Library Classification: R318 [Biomedical Engineering]
Discipline classification code: 0831
Abstract:
Objective. In this paper, we conduct a detailed investigation of the effect of independent component (IC)-based noise rejection methods on neural network classifier-based decoding of electroencephalography (EEG) data across different task datasets. Approach. We apply a pipeline matrix of two popular IC decomposition methods (Infomax and Adaptive Mixture Independent Component Analysis (AMICA)) with three different component rejection strategies (none, ICLabel, and the multiple artifact rejection algorithm (MARA)) to three different EEG datasets (motor imagery, long-term memory formation, and visual memory). We cross-validate processed data from each pipeline with three architectures commonly used for EEG classification (two convolutional neural networks and one long short-term memory-based model). We compare decoding performances at the within-participant and within-dataset levels. Main Results. Our results show that the benefit of using IC-based noise rejection for decoding analyses is at best minor, as component-rejected data did not show consistently better performance than data without rejection, especially given the significant computational resources required for independent component analysis (ICA) computations. Significance. With an ever-growing emphasis on transparency and reproducibility, as well as the obvious benefits arising from streamlined processing of large-scale datasets, there has been increased interest in automated methods for pre-processing EEG data. One prominent part of such pre-processing pipelines consists of identifying and potentially removing artifacts arising from extraneous sources. This is typically done via IC-based correction, for which numerous methods have been proposed, differing not only in how they decompose the raw data into ICs, but also in how they reject the computed ICs.
While the benefits of these methods are well established in univariate statistical analyses, it is unclear whether they help in multivariate scenarios, and specifically in neural network-based decoding studies. As computational costs for pre-processing large-scale datasets are considerable, it is important to consider whether the trade-off between model performance and available resources is worth the effort.
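As a rough illustration (this is not the authors' code, and all names are placeholder labels), the pipeline matrix described in the abstract can be enumerated as the cross product of decomposition methods, rejection strategies, datasets, and classifier architectures:

```python
from itertools import product

# Illustrative sketch of the evaluation grid described in the abstract.
# Each tuple is one (decomposition, rejection, dataset, model) configuration
# that would be pre-processed and then cross-validated.
decompositions = ["Infomax", "AMICA"]
rejections = ["none", "ICLabel", "MARA"]
datasets = ["motor imagery", "long-term memory formation", "visual memory"]
models = ["CNN-A", "CNN-B", "LSTM"]

pipelines = list(product(decompositions, rejections, datasets, models))
print(len(pipelines))  # 2 * 3 * 3 * 3 = 54 configurations
```

This makes the computational cost concern concrete: every added decomposition or rejection option multiplies the number of full pre-process-and-train runs.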
Pages: 23