Exploring Large Virtual Environments by Thoughts Using a Brain-Computer Interface Based on Motor Imagery and High-Level Commands

Cited by: 29
Authors
Lotte, Fabien [1 ,2 ,3 ]
van Langhenhove, Aurelien [1 ,2 ,3 ]
Lamarche, Fabrice [1 ,4 ]
Ernest, Thomas [1 ,2 ,3 ]
Renard, Yann [1 ,2 ]
Arnaldi, Bruno [1 ,3 ]
Lecuyer, Anatole [1 ,2 ]
Affiliations
[1] IRISA, Rennes, France
[2] INRIA, Rennes, France
[3] INSA Rennes, Rennes, France
[4] Univ Rennes 1, F-35014 Rennes, France
Source
PRESENCE-VIRTUAL AND AUGMENTED REALITY | 2010, Vol. 19, No. 1
Keywords
EEG; COMMUNICATION; REALITY; PERFORMANCE; SELECTION;
DOI
10.1162/pres.19.1.54
Chinese Library Classification
TP3 [Computing Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Brain-computer interfaces (BCI) are interaction devices that enable users to send commands to a computer using brain activity alone. In this paper, we propose a new interaction technique that enables users to perform complex interaction tasks and to navigate within large virtual environments (VE) using only a BCI based on imagined movements (motor imagery). This technique enables the user to send high-level mental commands, leaving the application in charge of most of the complex and tedious details of the interaction task. More precisely, it is based on points of interest and enables subjects to send only a few commands to the application in order to navigate from one point of interest to another. Interestingly, the points of interest for a given VE can be generated automatically by processing the geometry of that VE. As the navigation between two points of interest is also automatic, the proposed technique can be used to navigate efficiently by thought within any VE. The input to this interaction technique is a newly designed self-paced BCI that enables the user to send three different commands based on motor imagery. This BCI is based on a fuzzy inference system with reject options. To evaluate the efficiency of the proposed interaction technique, we compared it with the state-of-the-art method in a virtual museum exploration task. The state-of-the-art method uses low-level commands, meaning that each mental state of the user is associated with a simple command such as turning left or moving forward in the VE. In contrast, our method, based on high-level commands, enables the user to simply select a destination, leaving the application to perform the movements necessary to reach it.
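The "reject options" mentioned above can be illustrated with a minimal sketch: a self-paced BCI must distinguish intentional commands from the non-control state, so a decision is emitted only when the classifier is sufficiently confident. The function name, threshold value, and membership inputs below are illustrative assumptions, not the authors' actual fuzzy inference system.

```python
def classify_with_reject(memberships, threshold=0.6):
    """Illustrative self-paced decision rule: emit a command only when
    the strongest fuzzy membership degree is confident enough; otherwise
    reject, i.e., output no command (the user is in a non-control state).

    `memberships` maps command names to membership degrees in [0, 1].
    """
    best = max(memberships, key=memberships.get)
    if memberships[best] < threshold:
        return None  # reject option: ambiguous brain activity, do nothing
    return best

# Clear right-hand motor imagery: a command is emitted.
print(classify_with_reject({"left": 0.2, "right": 0.8, "feet": 0.1}))  # right
# No class dominates: the sample is rejected (no command).
print(classify_with_reject({"left": 0.4, "right": 0.35, "feet": 0.3}))  # None
```

Rejecting uncertain samples trades command throughput for fewer false activations, which matters in a self-paced setting where the user is not prompted when to act.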
Our results showed that with our interaction technique, users can navigate a virtual museum almost twice as fast as with low-level commands, and with nearly half as many commands, implying less stress and more comfort for the user. This suggests that our technique makes efficient use of the limited capacity of current motor imagery-based BCIs to perform complex interaction tasks in VE, opening the way to promising new applications.
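The high-level command scheme described in the abstract can be sketched as a small state machine: two motor-imagery commands cycle through the points of interest, and a third confirms the destination, after which the application navigates there automatically. The class, command names, and point list below are hypothetical, chosen only to illustrate the principle.

```python
from enum import Enum

class Command(Enum):
    LEFT = "left"        # e.g., left-hand motor imagery
    RIGHT = "right"      # e.g., right-hand motor imagery
    CONFIRM = "confirm"  # e.g., feet motor imagery

class PointOfInterestNavigator:
    """Cycles through points of interest with two commands and confirms
    a destination with a third; the application then performs the
    low-level movements to reach that destination automatically."""

    def __init__(self, points):
        self.points = points       # points of interest in the VE
        self.index = 0             # currently highlighted point
        self.position = points[0]  # current location in the VE

    def handle(self, cmd):
        if cmd is Command.LEFT:
            self.index = (self.index - 1) % len(self.points)
        elif cmd is Command.RIGHT:
            self.index = (self.index + 1) % len(self.points)
        elif cmd is Command.CONFIRM:
            # High-level command: the destination is selected, and the
            # application (not the user) carries out the navigation.
            self.position = self.points[self.index]
        return self.points[self.index]

nav = PointOfInterestNavigator(["entrance", "room_A", "room_B", "exit"])
nav.handle(Command.RIGHT)    # highlight "room_A"
nav.handle(Command.CONFIRM)  # navigate automatically to "room_A"
print(nav.position)          # room_A
```

Reaching any of the four points costs at most a few selection commands plus one confirmation, which is how a high-level scheme can roughly halve the command count compared to steering the avatar turn by turn.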
Pages: 54-70 (17 pages)
Related Papers (50 total)
  • [41] Quadcopter control in three-dimensional space using a noninvasive motor imagery-based brain-computer interface
    LaFleur, Karl
    Cassady, Kaitlin
    Doud, Alexander
    Shades, Kaleb
    Rogin, Eitan
    He, Bin
    JOURNAL OF NEURAL ENGINEERING, 2013, 10 (04)
  • [42] A Novel Classification Framework Using the Graph Representations of Electroencephalogram for Motor Imagery Based Brain-Computer Interface
    Jin, Jing
    Sun, Hao
    Daly, Ian
    Li, Shurui
    Liu, Chang
    Wang, Xingyu
    Cichocki, Andrzej
    IEEE TRANSACTIONS ON NEURAL SYSTEMS AND REHABILITATION ENGINEERING, 2022, 30 : 20 - 29
  • [43] Modulation of sensorimotor rhythms for brain-computer interface using motor imagery with online feedback
    Abdalsalam, Eltaf
    Yusoff, Mohd Zuki
    Malik, Aamir
    Kamel, Nidal S.
    Mahmoud, Dalia
    SIGNAL IMAGE AND VIDEO PROCESSING, 2018, 12 (03) : 557 - 564
  • [45] Towards Classifying Motor Imagery Using a Consumer-Grade Brain-Computer Interface
    Wang, Ganyu
    Martin, Miguel Vargas
    Hung, Patrick C. K.
    MacDonald, Shane
    2019 IEEE INTERNATIONAL CONFERENCE ON COGNITIVE COMPUTING (IEEE ICCC 2019), 2019, : 67 - 69
  • [46] Effect of a Brain-Computer Interface Based on Pedaling Motor Imagery on Cortical Excitability and Connectivity
    Cardoso, Vivianne Flavia
    Delisle-Rodriguez, Denis
    Romero-Laiseca, Maria Alejandra
    Loterio, Flavia A.
    Gurve, Dharmendra
    Floriano, Alan
    Valadao, Carlos
    Silva, Leticia
    Krishnan, Sridhar
    Frizera-Neto, Anselmo
    Freire Bastos-Filho, Teodiano
    SENSORS, 2021, 21 (06) : 1 - 13
  • [47] Effect of instructive visual stimuli on neurofeedback training for motor imagery-based brain-computer interface
    Kondo, Toshiyuki
    Saeki, Midori
    Hayashi, Yoshikatsu
    Nakayashiki, Kosei
    Takata, Yohei
    HUMAN MOVEMENT SCIENCE, 2015, 43 : 239 - 249
  • [48] Development of a Motor Imagery Based Brain-computer Interface for Humanoid Robot Control Applications
    Prakaksita, Narendra
    Kuo, Chen-Yun
    Kuo, Chung-Hsien
    PROCEEDINGS 2016 IEEE INTERNATIONAL CONFERENCE ON INDUSTRIAL TECHNOLOGY (ICIT), 2016, : 1607 - 1613
  • [49] Weighted Transfer Learning for Improving Motor Imagery-Based Brain-Computer Interface
    Azab, Ahmed M.
    Mihaylova, Lyudmila
    Ang, Kai Keng
    Arvaneh, Mahnaz
    IEEE TRANSACTIONS ON NEURAL SYSTEMS AND REHABILITATION ENGINEERING, 2019, 27 (07) : 1352 - 1359
  • [50] Performance of Motor Imagery Brain-Computer Interface Based on Anodal Transcranial Direct Current Stimulation Modulation
    Wei, Pengfei
    He, Wei
    Zhou, Yi
    Wang, Liping
    IEEE TRANSACTIONS ON NEURAL SYSTEMS AND REHABILITATION ENGINEERING, 2013, 21 (03) : 404 - 415