A Cross-Scale Transformer and Triple-View Attention Based Domain-Rectified Transfer Learning for EEG Classification in RSVP Tasks

Cited: 5
|
Authors
Luo, Jie [1 ]
Cui, Weigang [2 ]
Xu, Song [3 ]
Wang, Lina [4 ]
Chen, Huiling [4 ]
Li, Yang [5 ,6 ]
Affiliations
[1] Beihang Univ, Sch Automat Sci & Elect Engn, Beijing 100191, Peoples R China
[2] Beihang Univ, Sch Engn Med, Beijing 100191, Peoples R China
[3] Beijing Aerosp Automat Control Inst, Natl Key Lab Sci & Technol Aerosp Intelligence Con, Beijing 100070, Peoples R China
[4] Wenzhou Univ, Coll Comp Sci & Artificial Intelligence, Wenzhou 325035, Peoples R China
[5] Beihang Univ, Sch Automat Sci & Elect Engn, Beijing 100083, Peoples R China
[6] Beihang Univ, State Key Lab Virtual Real Technol & Syst, Beijing 100191, Peoples R China
Keywords
Brain-computer interface; EEG; RSVP; transformer; transfer learning; NEURAL-NETWORK; MODEL;
DOI
10.1109/TNSRE.2024.3359191
Chinese Library Classification
R318 [Biomedical Engineering];
Discipline Code
0831 ;
Abstract
Rapid serial visual presentation (RSVP)-based brain-computer interfaces (BCIs) are a promising target detection technique that uses electroencephalogram (EEG) signals. However, existing deep learning approaches seldom consider dependencies among multi-scale temporal features and discriminative multi-view spectral features simultaneously, which limits the representation learning ability of the model and undermines EEG classification performance. In addition, recent transfer learning-based methods generally fail to obtain transferable cross-subject invariant representations and commonly ignore individual-specific information, leading to poor cross-subject transfer performance. In response to these limitations, we propose a cross-scale Transformer and triple-view attention based domain-rectified transfer learning (CST-TVA-DRTL) framework for RSVP classification. Specifically, we first develop a cross-scale Transformer (CST) to extract multi-scale temporal features and exploit the dependencies among features at different scales. Then, a triple-view attention (TVA) module is designed to capture spectral features from three views of multi-channel time-frequency images. Finally, a domain-rectified transfer learning (DRTL) framework is proposed to simultaneously obtain transferable domain-invariant representations and untransferable domain-specific representations, and then use the domain-specific information to rectify the domain-invariant representations so that they adapt to the target data. Experimental results on two public RSVP datasets suggest that CST-TVA-DRTL outperforms state-of-the-art methods in the RSVP classification task.
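As a rough intuition for the "multi-scale temporal features" the abstract attributes to the CST module, the sketch below (an assumption for illustration only, not the authors' implementation, which uses Transformer attention) mean-pools a 1-D EEG trace at several window lengths so that fine and coarse temporal structure are represented side by side; the function name and scales are hypothetical.

```python
# Hypothetical illustration of multi-scale temporal summarization:
# pool one EEG channel at several window lengths (scales), so each
# scale yields a coarser view of the same signal.

def multiscale_features(signal, scales=(2, 4, 8)):
    """Mean-pool a 1-D signal with non-overlapping windows of each scale."""
    features = {}
    for s in scales:
        features[s] = [
            sum(signal[i:i + s]) / s                 # mean of one window
            for i in range(0, len(signal) - s + 1, s)  # step by the scale
        ]
    return features

# Toy alternating trace: scale 2 keeps the oscillation, scale 4 averages it away.
eeg = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0]
feats = multiscale_features(eeg)
print(feats[2])  # → [0.5, -0.5, 0.5, -0.5]
print(feats[4])  # → [0.0, 0.0]
```

In the paper's actual model, the dependencies between such per-scale feature streams are learned with cross-scale Transformer attention rather than fixed pooling.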
Pages: 672-683 (12 pages)