A revised Inception-ResNet model-based transfer learning for cross-subject decoding of fNIRS-BCI

Cited by: 1
Authors
Zhang, Yao [1 ]
Gao, Feng [1 ,2 ]
Affiliations
[1] Tianjin Univ, Sch Precis Instrument & Optoelect Engn, Tianjin 300072, Peoples R China
[2] Tianjin Key Lab Biomed Detecting Tech & Instrumen, Tianjin 300072, Peoples R China
Source
OPTICS IN HEALTH CARE AND BIOMEDICAL OPTICS XIII | 2023, Vol. 12770
Funding
National Natural Science Foundation of China
Keywords
revised Inception-ResNet model; deep transfer learning; cross-subject decoding; functional near-infrared spectroscopy; brain-computer interface;
DOI
10.1117/12.2689271
Chinese Library Classification
Q5 [Biochemistry]
Discipline codes
071010; 081704
Abstract
An extended calibration procedure is required to collect sufficient data for establishing a stable and reliable subject-specific classifier before a user can operate a brain-computer interface (BCI) system based on functional near-infrared spectroscopy (fNIRS). In addition, individual differences and varying data collection conditions lead to poor cross-subject generalization of such subject-specific classifiers. To address this dilemma and improve the versatility of fNIRS-BCI applications, we propose a revised Inception-ResNet (rIRN) model-based transfer learning (TL) approach to improve the cross-subject decoding accuracy of mental tasks. The TL-rIRN is a deep transfer learning model that combines an elaborated rIRN model for fNIRS signal classification with model-based transfer learning. The rIRN model is pre-trained on source-domain fNIRS data from multiple subjects to extract the features of neural activation common across subjects. The pre-trained model is then fine-tuned on a small amount of calibration data from the target subject, yielding a transferred model that accurately classifies new data from that subject. fNIRS data from eight participants were collected for the task of distinguishing mental arithmetic from mental singing. Leave-one-subject-out cross-validation was used to evaluate the cross-subject decoding performance of TL-rIRN. The results show that TL-rIRN improves cross-subject decoding accuracy while effectively reducing model training time, calibration time, and computational resources.
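The pre-train/fine-tune workflow with leave-one-subject-out evaluation described in the abstract can be sketched as follows. This is an illustrative outline, not the authors' code: the rIRN network is stood in for by a trivial nearest-centroid classifier so the example stays self-contained, and the subject data, trial counts, and calibration split are synthetic assumptions.

```python
# Sketch of model-based transfer learning with leave-one-subject-out (LOSO)
# cross-validation, as described in the abstract. A nearest-centroid
# classifier on synthetic 1-D features stands in for the rIRN model.
import random

random.seed(0)

N_SUBJECTS = 8          # as in the study
TRIALS_PER_CLASS = 20   # hypothetical trial count per class
CLASSES = (0, 1)        # mental arithmetic vs. mental singing

def make_subject(bias):
    """Synthetic features: class 1 shifted by +1, plus a per-subject bias."""
    return [(random.gauss(label + bias, 0.5), label)
            for label in CLASSES
            for _ in range(TRIALS_PER_CLASS)]

subjects = [make_subject(bias=random.gauss(0, 0.3)) for _ in range(N_SUBJECTS)]

def train_centroids(trials):
    """'Training': per-class feature means (stand-in for model weights)."""
    return {c: sum(x for x, y in trials if y == c) /
               max(1, sum(1 for _, y in trials if y == c))
            for c in CLASSES}

def predict(centroids, x):
    return min(CLASSES, key=lambda c: abs(x - centroids[c]))

accuracies = []
for held_out in range(N_SUBJECTS):
    # 1) Pre-train on the source domain: all other subjects pooled.
    source = [t for s in range(N_SUBJECTS) if s != held_out
                for t in subjects[s]]
    pretrained = train_centroids(source)

    # 2) Fine-tune with a small calibration set from the target subject:
    #    here, blend the pre-trained centroids with the target's means.
    target = subjects[held_out]
    calibration = target[:10] + target[-10:]   # 10 trials per class
    test = target[10:-10]                      # remaining 20 trials
    calib = train_centroids(calibration)
    finetuned = {c: 0.5 * pretrained[c] + 0.5 * calib[c] for c in CLASSES}

    # 3) Evaluate on the target subject's remaining trials.
    correct = sum(predict(finetuned, x) == y for x, y in test)
    accuracies.append(correct / len(test))

mean_acc = sum(accuracies) / len(accuracies)
print(f"LOSO mean accuracy over {N_SUBJECTS} subjects: {mean_acc:.2f}")
```

In a realistic pipeline, step 2 would instead reload the pre-trained network weights and continue gradient training on the calibration trials, typically with the feature-extraction layers frozen; the LOSO loop structure, however, is the same.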
Pages: 8