Decoding kinetic features of hand motor preparation from single-trial EEG using convolutional neural networks

Cited by: 7
Authors
Gatti, Ramiro [1 ,2 ]
Atum, Yanina [2 ]
Schiaffino, Luciano [2 ]
Jochumsen, Mads [3 ]
Biurrun Manresa, Jose [1 ,2 ,4 ]
Affiliations
[1] UNER, CONICET, Inst Res & Dev Bioengn & Bioinformat IBB, Route 11 Km 10, Oro Verde, Entre Rios, Argentina
[2] Natl Univ Entre Rios, Fac Engn, Lab Rehabil Engn & Neuromuscular & Sensor Res LIR, Oro Verde, Argentina
[3] Aalborg Univ, Ctr Sensory Motor Interact SMI, Aalborg, Denmark
[4] Aalborg Univ, Ctr Neuroplast & Pain CNAP, Aalborg, Denmark
Funding
National Research Foundation, Singapore;
Keywords
brain computer interface; deep learning; movement prediction; multi-class classification; neural engineering; MOVEMENT-RELATED POTENTIALS; BRAIN-COMPUTER INTERFACES; CLASSIFICATION; SELECTION; INTENTION;
DOI
10.1111/ejn.14936
Chinese Library Classification
Q189 [Neuroscience];
Discipline code
071006;
Abstract
Building accurate movement decoding models from brain signals is crucial for many biomedical applications. Predicting specific movement features, such as speed and force, before movement execution may provide additional useful information at the expense of increasing the complexity of the decoding problem. Recent attempts to predict movement speed and force from the electroencephalogram (EEG) achieved classification accuracies at or slightly above chance levels, highlighting the need for more accurate prediction strategies. Thus, the aims of this study were to accurately predict hand movement speed and force from single-trial EEG signals and to decode neurophysiological information of motor preparation from the prediction strategies. To these ends, a decoding model based on convolutional neural networks (ConvNets) was implemented and compared against other state-of-the-art prediction strategies, such as support vector machines and decision trees. ConvNets outperformed the other prediction strategies, achieving an overall accuracy of 84% in the classification of two different levels of speed and force (four-class classification) from pre-movement single-trial EEG (100 ms and up to 1,600 ms prior to movement execution). Furthermore, an analysis of the ConvNet architectures suggests that the network performs a complex spatiotemporal integration of EEG data to optimize classification accuracy. These results show that movement speed and force can be accurately predicted from single-trial EEG, and that the prediction strategies may provide useful neurophysiological information about motor preparation.
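The abstract describes a shallow convolutional architecture that performs temporal filtering, spatial integration across EEG channels, and pooling before a 4-class (speed x force) softmax. The following is a minimal NumPy sketch of such a forward pass, not the authors' implementation; all layer sizes, the squaring/log nonlinearities, and the 100 Hz sampling assumption are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def temporal_conv(x, kernels):
    # x: (channels, samples); kernels: (filters, kernel_len)
    # -> (filters, channels, samples - kernel_len + 1)
    F, K = kernels.shape
    C, T = x.shape
    out = np.empty((F, C, T - K + 1))
    for f in range(F):
        for c in range(C):
            # reversed kernel turns convolution into cross-correlation
            out[f, c] = np.convolve(x[c], kernels[f][::-1], mode="valid")
    return out

def spatial_conv(x, weights):
    # collapse the channel dimension with one spatial filter per temporal
    # filter: (F, C, T'), (F, C) -> (F, T')
    return np.einsum("fct,fc->ft", x, weights)

def forward(x, tk, sw, W, b, pool=10):
    h = spatial_conv(temporal_conv(x, tk), sw)
    h = np.square(h)                                  # squaring nonlinearity
    T = (h.shape[1] // pool) * pool
    h = h[:, :T].reshape(h.shape[0], -1, pool).mean(axis=2)  # mean pooling
    h = np.log(h + 1e-8)                              # log activation
    logits = h.reshape(-1) @ W + b                    # dense layer, 4 classes
    e = np.exp(logits - logits.max())
    return e / e.sum()                                # softmax probabilities

# Illustrative dimensions: 10 channels, a 1.6 s pre-movement window at 100 Hz.
C, T, F, K = 10, 160, 4, 11
x = rng.standard_normal((C, T))                       # one single-trial epoch
tk = rng.standard_normal((F, K)) * 0.1                # temporal kernels
sw = rng.standard_normal((F, C)) * 0.1                # spatial filters
n_feat = F * ((T - K + 1) // 10)
W = rng.standard_normal((n_feat, 4)) * 0.01
b = np.zeros(4)
p = forward(x, tk, sw, W, b)                          # P(slow/fast x weak/strong)
```

In a trained model the kernels and dense weights would be fit by backpropagation on labeled pre-movement epochs; the sketch only shows how a single trial flows through the spatiotemporal integration the abstract refers to.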
Pages: 556-570 (15 pages)