Teleoperation through Eye Gaze (TeleGaze): A Multimodal Approach

Cited by: 17
Authors
Omer, Hemin [1 ]
Sherkat, Nasser [1 ]
Lotfi, Ahmad [2 ]
Affiliations
[1] Nottingham Trent Univ, Sch Sci & Technol, Nottingham, England
[2] Nottingham Trent Univ, Dept Comp & Technol, Nottingham, England
Source
2009 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND BIOMIMETICS (ROBIO 2009), VOLS 1-4 | 2009
Keywords
Teleoperation Interfaces; Mobile Robot Navigation; Human-Robot Interaction; Eye-Gaze Tracking; Intelligent User Interfaces; TRACKING;
DOI
10.1109/ROBIO.2009.5420585
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Most mobile robot teleoperation tasks require both monitoring and controlling from a remote location, engaging the hands and the eyes of the human operator for the whole duration of the operation. Aiming to minimize this engagement by freeing the operator's hands from the controlling task, previous work by the authors showed that both monitoring and controlling can be achieved using inputs from the operator's eyes alone. The TeleGaze interface, developed as a novel interface for teleoperation through eye gaze, enables the operator to control a robotic platform, an onboard active vision system and some aspects of the interface itself using eye-gaze tracking. However, the advantage of free hands came at the cost of an increase in the task's overall workload, placing extra stress on the operator. With the aim of optimizing the human-robot interaction experience while using TeleGaze, a multimodal approach is believed to be necessary. Therefore, in addition to further refinements in the design of the interface, a multimodal TeleGaze system has been developed. This paper presents both the multimodal version of TeleGaze, which uses an accelerator pedal, and the native version, which uses dwell time. Details of the interface design refinements and the results of a task-oriented evaluation of three different modes of interaction, including both TeleGaze modes, are also included.
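The abstract contrasts two activation modes for gaze-driven control: the native mode, where a command region fires after the gaze dwells on it long enough, and the multimodal mode, where gaze only selects and an accelerator pedal confirms. The sketch below illustrates that distinction in Python; the region layout, the 0.8 s dwell threshold, and all class and function names are hypothetical illustrations and not taken from the paper.

```python
# Minimal sketch of the two TeleGaze-style activation modes described in the
# abstract: "native" dwell-time activation versus multimodal activation with an
# accelerator pedal. All names, thresholds and the screen-region layout are
# assumptions for illustration, not the authors' implementation.

from dataclasses import dataclass

DWELL_TIME_S = 0.8          # assumed dwell threshold before a region fires


@dataclass
class Region:
    """A rectangular on-screen action region (e.g. 'forward', 'pan left')."""
    name: str
    x: int
    y: int
    w: int
    h: int

    def contains(self, gx: int, gy: int) -> bool:
        return self.x <= gx < self.x + self.w and self.y <= gy < self.y + self.h


class GazeActivator:
    """Turns gaze fixations on interface regions into robot commands."""

    def __init__(self, regions, use_pedal: bool):
        self.regions = regions
        self.use_pedal = use_pedal      # multimodal mode when True
        self._current = None            # region currently fixated
        self._since = 0.0               # time the current fixation began

    def update(self, gx: int, gy: int, pedal_pressed: bool, now: float):
        """Feed one gaze sample; return the name of an active command or None."""
        hit = next((r for r in self.regions if r.contains(gx, gy)), None)
        if hit is not self._current:
            self._current, self._since = hit, now
        if hit is None:
            return None
        if self.use_pedal:
            # Multimodal mode: looking at a region only selects it; the
            # accelerator pedal confirms, so an idle glance triggers nothing.
            return hit.name if pedal_pressed else None
        # Native mode: the region stays active once the gaze has dwelt on it
        # for the threshold, so the command is held while fixation continues.
        return hit.name if (now - self._since) >= DWELL_TIME_S else None


if __name__ == "__main__":
    regions = [Region("forward", 300, 0, 200, 100), Region("stop", 300, 500, 200, 100)]
    gaze = GazeActivator(regions, use_pedal=True)
    # Simulated sample: operator looks at 'forward' and presses the pedal.
    print(gaze.update(350, 50, pedal_pressed=True, now=0.0))   # -> 'forward'
```

In the pedal mode of this sketch a glance at a control region has no effect on its own, which is one way a second modality can spare the operator from having to avoid looking at controls, in line with the workload concern raised in the abstract.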
Pages: 711+
Page count: 2