Multi-modal Decoding of Reach-to-Grasping from EEG and EMG via Neural Networks

Cited by: 0
Authors
Borra, Davide [1 ]
Fraternali, Matteo [1 ]
Ravanelli, Mirco [2 ,3 ]
Magosso, Elisa [1 ]
Affiliations
[1] Univ Bologna, Dept Elect Elect & Informat Engn Guglielmo Marconi, Cesena Campus, Cesena, Italy
[2] Concordia Univ, Dept Comp Sci & Software Engn, Montreal, PQ, Canada
[3] Mila Quebec AI Inst, Montreal, PQ, Canada
Source
ARTIFICIAL NEURAL NETWORKS IN PATTERN RECOGNITION, ANNPR 2024 | 2024, Volume 15154
Keywords
EEG; EMG; Multi-modal motor decoding; Reach-to-grasping; Convolutional neural networks; Brain-Computer Interfaces;
DOI
10.1007/978-3-031-71602-7_15
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Convolutional neural networks (CNNs) have revolutionized motor decoding from electroencephalographic (EEG) signals, showcasing their ability to outperform traditional machine learning, especially in Brain-Computer Interface (BCI) applications. Motor decoding has been further improved by processing other recording modalities (e.g., electromyography, EMG) together with EEG signals. However, multi-modal algorithms for decoding hand movements have mainly been applied to simple movements (e.g., wrist flexion/extension), while their adoption for decoding complex movements (e.g., different grip types) remains under-investigated. In this study, we recorded EEG and EMG signals from 12 participants while they performed a delayed reach-to-grasping task towards one of four possible objects (a handle, a pin, a card, and a ball), and we addressed multi-modal EEG+EMG decoding with a dual-branch CNN. Each branch of the CNN was based on EEGNet. The performance of the multi-modal approach was compared to mono-modal baselines (based on EEG or EMG only). The multi-modal EEG+EMG pipeline outperformed the EEG-based pipeline during movement initiation, and it outperformed the EMG-based pipeline during motor preparation. Finally, the multi-modal approach accurately discriminated between grip types throughout most of the task, especially from movement initiation onward. Our results further validate multi-modal decoding for potential future BCI applications, aiming at a more natural user experience.
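The abstract describes a dual-branch CNN in which each branch is based on EEGNet and the two modalities (EEG and EMG) are decoded jointly. The sketch below is only an illustration of such an architecture in PyTorch, not the authors' implementation: the filter counts, kernel lengths, feature-level concatenation fusion, channel counts, and epoch length are all assumptions (chosen to mirror common EEGNet defaults), since the abstract does not specify them.

```python
# Illustrative sketch of a dual-branch EEGNet-style network (assumed design).
# Hyperparameters and the concatenation-based fusion are NOT taken from the
# paper; they are placeholder choices modeled on standard EEGNet defaults.
import torch
import torch.nn as nn


class EEGNetBranch(nn.Module):
    """One EEGNet-style branch processing a single modality (EEG or EMG)."""

    def __init__(self, n_channels: int, n_samples: int,
                 f1: int = 8, d: int = 2, f2: int = 16, p_drop: float = 0.25):
        super().__init__()
        self.features = nn.Sequential(
            # Temporal convolution: learns band-pass-like filters along time.
            nn.Conv2d(1, f1, (1, 64), padding=(0, 32), bias=False),
            nn.BatchNorm2d(f1),
            # Depthwise spatial convolution: mixes channels per temporal filter.
            nn.Conv2d(f1, f1 * d, (n_channels, 1), groups=f1, bias=False),
            nn.BatchNorm2d(f1 * d),
            nn.ELU(),
            nn.AvgPool2d((1, 4)),
            nn.Dropout(p_drop),
            # Separable convolution: depthwise temporal + pointwise mixing.
            nn.Conv2d(f1 * d, f1 * d, (1, 16), padding=(0, 8),
                      groups=f1 * d, bias=False),
            nn.Conv2d(f1 * d, f2, (1, 1), bias=False),
            nn.BatchNorm2d(f2),
            nn.ELU(),
            nn.AvgPool2d((1, 8)),
            nn.Dropout(p_drop),
            nn.Flatten(),
        )
        # Infer the flattened feature size with a dummy forward pass.
        with torch.no_grad():
            self.out_features = self.features(
                torch.zeros(1, 1, n_channels, n_samples)).shape[1]

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, channels, time samples)
        return self.features(x)


class DualBranchNet(nn.Module):
    """EEG and EMG branches fused by feature concatenation (assumed fusion)."""

    def __init__(self, eeg_channels: int, emg_channels: int,
                 n_samples: int, n_classes: int = 4):
        super().__init__()
        self.eeg_branch = EEGNetBranch(eeg_channels, n_samples)
        self.emg_branch = EEGNetBranch(emg_channels, n_samples)
        self.classifier = nn.Linear(
            self.eeg_branch.out_features + self.emg_branch.out_features,
            n_classes)

    def forward(self, eeg: torch.Tensor, emg: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.eeg_branch(eeg), self.emg_branch(emg)], dim=1)
        return self.classifier(fused)  # logits over the four grip targets


if __name__ == "__main__":
    # Hypothetical montage: 61 EEG channels, 8 EMG channels, 2 s at 128 Hz.
    model = DualBranchNet(eeg_channels=61, emg_channels=8, n_samples=256)
    eeg = torch.randn(4, 1, 61, 256)
    emg = torch.randn(4, 1, 8, 256)
    print(model(eeg, emg).shape)  # torch.Size([4, 4])
```

In this sketch, each modality is processed by its own EEGNet-style feature extractor and the resulting feature vectors are concatenated before a single linear classifier over the four objects; other fusion schemes (e.g., decision-level fusion) would fit the same dual-branch description.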
Pages: 168-179 (12 pages)