Jump motion intention recognition and brain activity analysis based on EEG signals and Vision Transformer model

Times Cited: 3
Authors
Lu, Yanzheng [1 ,2 ]
Wang, Hong [3 ]
Niu, Jianye [1 ,2 ]
Lu, Zhiguo [3 ]
Liu, Chong [3 ]
Feng, Naishi [3 ]
Affiliations
[1] Yanshan Univ, Parallel Robot & Mechatron Syst Lab Hebei Prov, Qinhuangdao 066000, Peoples R China
[2] Yanshan Univ, Sch Mech Engn, Qinhuangdao 066000, Peoples R China
[3] Northeastern Univ, Sch Mech Engn & Automat, Shenyang 110819, Peoples R China
Keywords
Brain-computer interface; Electroencephalography; Motor execution; Sensorimotor rhythm; Self-attention architecture; Source localization; CORTICAL POTENTIALS; INITIATION
DOI
10.1016/j.bspc.2024.107001
Chinese Library Classification
R318 [Biomedical Engineering]
Discipline Code
0831
Abstract
The lower limb exoskeleton can assist human jump movement and extend jump ability. However, recognizing jump motion intention promptly and accurately remains a challenge. Electroencephalography (EEG) signals have the potential to recognize jump motion early because they accompany the generation of motor intention. This paper proposes a Vision Transformer-based method that extracts spatial-spectral-temporal information from multi-channel EEG signals to recognize jump motion intentions, including pre-jump, off-ground, and post-jump. Artifacts in the EEG signals are removed by filtering and independent component analysis. Movement-related cortical potential, time-frequency, source localization, and functional connectivity analyses are performed to characterize the changes in brain activity during jumping, which explore the motion control mechanism of the brain and provide a physiological basis for constructing the recognition method. The dimensionality of the time-domain and frequency-domain EEG features is reduced by channel, feature, and frequency band selection. The proposed model, based on a multi-head self-attention architecture, extracts spatial information from a sequence composed of multi-channel temporal and spectral fusion features. The proposed model outperforms the comparison models, achieving an average recognition accuracy of 88.797% and a kappa coefficient of 0.8325. The classification performance of the temporal and spectral fusion features is higher than that of the temporal features alone and, especially, the spectral features alone. The brain networks constructed from attention coefficients focus on connectivity differences among tasks and do not reflect the physiological significance of the EEG signals. Finally, the proposed method is validated on the open-access BCI Competition IV Dataset 2a of EEG signals.
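To illustrate the kind of classifier the abstract describes (multi-head self-attention applied to a sequence of per-channel temporal and spectral fusion features to extract spatial information), the sketch below is one plausible minimal realization in PyTorch. It is not the authors' implementation: the channel count, feature dimension, model width, head count, layer count, and the use of a CLS-style token are illustrative assumptions.

```python
# Minimal sketch (assumed architecture, not the paper's code): a multi-head
# self-attention classifier over per-channel EEG feature vectors, predicting
# the three jump phases (pre-jump, off-ground, post-jump).
import torch
import torch.nn as nn


class ChannelAttentionClassifier(nn.Module):
    def __init__(self, n_channels=32, feat_dim=16, d_model=64,
                 n_heads=4, n_layers=2, n_classes=3):
        super().__init__()
        # Project each channel's fused temporal+spectral feature vector to d_model.
        self.embed = nn.Linear(feat_dim, d_model)
        # Learnable CLS-style token and per-channel (positional) embeddings.
        self.cls_token = nn.Parameter(torch.zeros(1, 1, d_model))
        self.pos = nn.Parameter(torch.zeros(1, n_channels + 1, d_model))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, dim_feedforward=128,
            batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):  # x: (batch, n_channels, feat_dim)
        tok = self.embed(x)                              # (B, C, d_model)
        cls = self.cls_token.expand(x.size(0), -1, -1)   # (B, 1, d_model)
        z = torch.cat([cls, tok], dim=1) + self.pos      # (B, C+1, d_model)
        z = self.encoder(z)                              # self-attention across channels
        return self.head(z[:, 0])                        # classify from the CLS token


# Example: 8 trials, 32 channels, 16 fused features per channel (all assumed sizes).
logits = ChannelAttentionClassifier()(torch.randn(8, 32, 16))
print(logits.shape)  # torch.Size([8, 3])
```

Treating each electrode's fused feature vector as one token lets the attention weights relate channels to one another, which is consistent with the abstract's note that attention coefficients were later used to construct brain networks.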
Pages: 16
Related Papers (50 records)
  • [1] Task Based Motion Intention Prediction with EEG Signals
    Bandara, D. S. V.
    Arata, Jumpei
    Kiguchi, Kazuo
    2016 IEEE INTERNATIONAL SYMPOSIUM ON ROBOTICS AND INTELLIGENT SENSORS (IRIS), 2016, : 57 - 60
  • [2] The Recognition of Motion Intention of Knee Joint Based on Piezoelectric Signals
    Zhu, Benben
    Wan, Zhou
    Xu, Yi
    2018 INTERNATIONAL SEMINAR ON COMPUTER SCIENCE AND ENGINEERING TECHNOLOGY (SCSET 2018), 2019, 1176
  • [3] Transformer-based fusion model for mild depression recognition with EEG and pupil area signals
    Zhu, Jing
    Li, Yuanlong
    Yang, Changlin
    Cai, Hanshu
    Li, Xiaowei
    Hu, Bin
    MEDICAL & BIOLOGICAL ENGINEERING & COMPUTING, 2025,
  • [4] Classification and recognition of gesture EEG signals with Transformer-Based models
    Qu, Yan
    Li, Congsheng
    Jiang, Haoyu
    2024 3RD INTERNATIONAL CONFERENCE ON ROBOTICS, ARTIFICIAL INTELLIGENCE AND INTELLIGENT CONTROL, RAIIC 2024, 2024, : 415 - 418
  • [5] A human activity recognition method based on Vision Transformer
    Han, Huiyan
    Zeng, Hongwei
    Kuang, Liqun
    Han, Xie
    Xue, Hongxin
    SCIENTIFIC REPORTS, 2024, 14 (01):
  • [6] Concept of Brain-Controlled Exoskeleton Based on Motion Tracking and EEG Signals Analysis
    Olczak, Andrzej
    BIOMEDICAL ENGINEERING AND NEUROSCIENCE, 2018, 720 : 141 - 149
  • [7] Residual Learning Attention CNN for Motion Intention Recognition Based on EEG Data
    Wang, Ting
    Mao, Jingna
    Xiao, Ruozhou
    Wang, Wuqi
    Ding, Guangxin
    Zhang, Zhiwei
    2021 IEEE BIOMEDICAL CIRCUITS AND SYSTEMS CONFERENCE (IEEE BIOCAS 2021), 2021,
  • [8] Hybrid feature integration model and adaptive transformer approach for emotion recognition with EEG signals
    Reddy, C. H. Narsimha
    Mahesh, Shanthi
    Manjunathachari, K.
    COMPUTER METHODS IN BIOMECHANICS AND BIOMEDICAL ENGINEERING, 2024, 27 (12) : 1610 - 1632
  • [9] A Transformer Based Emotion Recognition Model for Social Robots Using Topographical Maps Generated from EEG Signals
    Bethany, Gosala
    Gupta, Manjari
    HUMAN-COMPUTER INTERACTION, PT I, HCI 2024, 2024, 14684 : 262 - 271
  • [10] Analysis and intention recognition of motor imagery EEG signals based on multi-feature convolutional neural network
    He Q.
    Shao D.
    Wang Y.
    Zhang Y.
    Xie P.
    Yi Qi Yi Biao Xue Bao/Chinese Journal of Scientific Instrument, 2020, 41 (01): : 138 - 146