Multi-modal Decoding of Reach-to-Grasping from EEG and EMG via Neural Networks

Cited: 0
Authors
Borra, Davide [1 ]
Fraternali, Matteo [1 ]
Ravanelli, Mirco [2 ,3 ]
Magosso, Elisa [1 ]
Affiliations
[1] Univ Bologna, Dept Elect Elect & Informat Engn Guglielmo Marconi, Cesena Campus, Cesena, Italy
[2] Concordia Univ, Dept Comp Sci & Software Engn, Montreal, PQ, Canada
[3] Mila Quebec AI Inst, Montreal, PQ, Canada
Source
ARTIFICIAL NEURAL NETWORKS IN PATTERN RECOGNITION, ANNPR 2024 | 2024, Vol. 15154
Keywords
EEG; EMG; Multi-modal motor decoding; Reach-to-grasping; Convolutional neural networks; Brain-Computer Interfaces;
DOI
10.1007/978-3-031-71602-7_15
Chinese Library Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Convolutional neural networks (CNNs) have revolutionized motor decoding from electroencephalographic (EEG) signals, showing their ability to outperform traditional machine learning, especially in Brain-Computer Interface (BCI) applications. Motor decoding has been further improved by processing other recording modalities (e.g., electromyography, EMG) together with EEG signals. However, multi-modal algorithms for decoding hand movements have mainly been applied to simple movements (e.g., wrist flexion/extension), while their adoption for decoding complex movements (e.g., different grip types) remains under-investigated. In this study, we recorded EEG and EMG signals from 12 participants while they performed a delayed reach-to-grasping task towards one of four possible objects (a handle, a pin, a card, and a ball), and we addressed multi-modal EEG+EMG decoding with a dual-branch CNN, each branch based on EEGNet. The performance of the multi-modal approach was compared to mono-modal baselines based on EEG or EMG alone. The multi-modal EEG+EMG pipeline outperformed the EEG-based pipeline during movement initiation, and it outperformed the EMG-based pipeline during motor preparation. Finally, the multi-modal approach accurately discriminated between grip types across a large portion of the task, especially from movement initiation onward. Our results further validate multi-modal decoding for potential future BCI applications aimed at a more natural user experience.
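The record does not include the authors' code. Below is a minimal sketch of the dual-branch EEG+EMG architecture described in the abstract, assuming PyTorch; the channel counts, time-window length, and EEGNet-style hyperparameters are illustrative placeholders, not the paper's actual settings. Each branch follows an EEGNet-like design (temporal convolution, depthwise spatial convolution, separable convolution), and the two feature vectors are concatenated before a linear classifier over the four grip classes.

```python
# Hedged sketch of a dual-branch EEG+EMG grip-type classifier (not the
# authors' implementation). All sizes below are illustrative assumptions.
import torch
import torch.nn as nn


class EEGNetBranch(nn.Module):
    """EEGNet-style feature extractor for one modality (EEG or EMG)."""

    def __init__(self, n_channels, n_samples, f1=8, d=2, f2=16, kern_len=64):
        super().__init__()
        self.block1 = nn.Sequential(
            # Temporal convolution over time samples
            nn.Conv2d(1, f1, (1, kern_len), padding=(0, kern_len // 2), bias=False),
            nn.BatchNorm2d(f1),
            # Depthwise spatial convolution across recording channels
            nn.Conv2d(f1, f1 * d, (n_channels, 1), groups=f1, bias=False),
            nn.BatchNorm2d(f1 * d),
            nn.ELU(),
            nn.AvgPool2d((1, 4)),
            nn.Dropout(0.5),
        )
        self.block2 = nn.Sequential(
            # Separable convolution: depthwise temporal + pointwise
            nn.Conv2d(f1 * d, f1 * d, (1, 16), padding=(0, 8), groups=f1 * d, bias=False),
            nn.Conv2d(f1 * d, f2, (1, 1), bias=False),
            nn.BatchNorm2d(f2),
            nn.ELU(),
            nn.AvgPool2d((1, 8)),
            nn.Dropout(0.5),
        )

    def forward(self, x):  # x: (batch, 1, channels, samples)
        return self.block2(self.block1(x)).flatten(start_dim=1)


class DualBranchNet(nn.Module):
    """Two EEGNet-style branches (EEG, EMG) fused by concatenation."""

    def __init__(self, eeg_ch=61, emg_ch=8, n_samples=512, n_classes=4):
        super().__init__()
        self.eeg_branch = EEGNetBranch(eeg_ch, n_samples)
        self.emg_branch = EEGNetBranch(emg_ch, n_samples)
        with torch.no_grad():  # infer fused feature size from dummy inputs
            feat = (
                self.eeg_branch(torch.zeros(1, 1, eeg_ch, n_samples)).shape[1]
                + self.emg_branch(torch.zeros(1, 1, emg_ch, n_samples)).shape[1]
            )
        self.classifier = nn.Linear(feat, n_classes)

    def forward(self, eeg, emg):
        fused = torch.cat([self.eeg_branch(eeg), self.emg_branch(emg)], dim=1)
        return self.classifier(fused)


if __name__ == "__main__":
    net = DualBranchNet()
    eeg = torch.randn(2, 1, 61, 512)  # (batch, 1, EEG channels, time samples)
    emg = torch.randn(2, 1, 8, 512)   # (batch, 1, EMG channels, time samples)
    print(net(eeg, emg).shape)        # -> torch.Size([2, 4]): 4 grip classes
```

The dummy forward pass at the bottom only illustrates the assumed input layout (batch, 1, channels, time samples) and the four-class output; window length, channel montages, and fusion strategy would need to follow the paper's actual protocol.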
Pages: 168-179
Number of pages: 12