Exploring Large Virtual Environments by Thoughts Using a Brain-Computer Interface Based on Motor Imagery and High-Level Commands

Cited by: 29
Authors
Lotte, Fabien [1 ,2 ,3 ]
van Langhenhove, Aurelien [1 ,2 ,3 ]
Lamarche, Fabrice [1 ,4 ]
Ernest, Thomas [1 ,2 ,3 ]
Renard, Yann [1 ,2 ]
Arnaldi, Bruno [1 ,3 ]
Lecuyer, Anatole [1 ,2 ]
Affiliations
[1] IRISA, Rennes, France
[2] INRIA, Rennes, France
[3] INSA Rennes, Rennes, France
[4] Univ Rennes 1, F-35014 Rennes, France
Source
PRESENCE-VIRTUAL AND AUGMENTED REALITY | 2010, Vol. 19, No. 1
Keywords
EEG; COMMUNICATION; REALITY; PERFORMANCE; SELECTION;
DOI
10.1162/pres.19.1.54
Chinese Library Classification (CLC)
TP3 [Computing technology; computer technology];
Discipline Code
0812;
Abstract
Brain-computer interfaces (BCIs) are interaction devices that enable users to send commands to a computer using brain activity alone. In this paper, we propose a new interaction technique that enables users to perform complex interaction tasks and to navigate within large virtual environments (VEs) using only a BCI based on imagined movements (motor imagery). This technique lets the user send high-level mental commands, leaving the application in charge of most of the complex and tedious details of the interaction task. More precisely, it is based on points of interest and requires subjects to send only a few commands to the application in order to navigate from one point of interest to another. Interestingly, the points of interest for a given VE can be generated automatically by processing the VE's geometry. As the navigation between two points of interest is also automatic, the proposed technique can be used to navigate efficiently by thought within any VE. The input of this interaction technique is a newly designed self-paced BCI that enables the user to send three different commands based on motor imagery. This BCI relies on a fuzzy inference system with reject options. To evaluate the efficiency of the proposed interaction technique, we compared it with the state-of-the-art method during a virtual museum exploration task. The state-of-the-art method uses low-level commands, meaning that each mental state of the user is associated with a simple command such as turning left or moving forward in the VE. In contrast, our method based on high-level commands enables the user to simply select his or her destination, leaving the application to perform the movements needed to reach it.
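The self-paced operation described in the abstract hinges on a reject option: a command is issued only when the classifier is sufficiently confident, so that no command is sent while the user is in a non-control state. The following is a minimal illustrative sketch of that idea; the class names, threshold value, and function are assumptions for illustration, not the authors' fuzzy-inference implementation.

```python
import numpy as np

COMMANDS = ["left_hand", "right_hand", "feet"]  # three motor-imagery classes

def classify_with_reject(class_scores, accept_threshold=0.6):
    """Return a command only when the winning class is confident enough;
    otherwise reject, i.e. emit no command (self-paced operation)."""
    scores = np.asarray(class_scores, dtype=float)
    probs = scores / scores.sum()  # normalize to a membership-like vector
    best = int(np.argmax(probs))
    if probs[best] < accept_threshold:
        return None  # reject: treat the input as a non-control state
    return COMMANDS[best]
```

For instance, a clearly dominant score vector such as `[0.8, 0.1, 0.1]` yields `"left_hand"`, while an ambiguous vector such as `[0.4, 0.35, 0.25]` is rejected and produces no command.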
Our results showed that with our interaction technique, users can navigate within a virtual museum almost twice as fast as with low-level commands, and with nearly half the commands, which means less stress and more comfort for the user. This suggests that our technique makes efficient use of the limited capacity of current motor imagery-based BCIs to perform complex interaction tasks in VEs, opening the way to promising new applications.
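The high-level navigation scheme described above delegates movement to the application: the user selects a destination point of interest, and the system plans and executes the walk automatically. A minimal sketch of such automatic planning, assuming the points of interest are linked into a navigation graph (the museum layout and function names below are hypothetical, not taken from the paper):

```python
from collections import deque

def plan_route(graph, start, goal):
    """Breadth-first search returning the sequence of points of interest
    to traverse from start to goal, or None if the goal is unreachable."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None

# Toy museum layout: rooms as points of interest, edges as walkable links.
museum = {
    "entrance": ["hall"],
    "hall": ["entrance", "room_a", "room_b"],
    "room_a": ["hall"],
    "room_b": ["hall", "room_c"],
    "room_c": ["room_b"],
}
```

With this layout, selecting `room_c` from the entrance requires a single high-level command, after which `plan_route(museum, "entrance", "room_c")` yields the full walk through the hall and `room_b`, rather than one mental command per turn or step as in the low-level approach.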
Pages: 54-70
Page count: 17