Symmetric Evaluation of Multimodal Human-Robot Interaction with Gaze and Standard Control

Cited: 3
Authors
Jones, Ethan R. [1 ]
Chinthammit, Winyu [1 ]
Huang, Weidong [2 ]
Engelke, Ulrich [3 ]
Lueg, Christopher [1 ]
Affiliations
[1] Univ Tasmania, Sch Technol Environm & Design, Hobart, Tas 7005, Australia
[2] Swinburne Univ Technol, Sch Software & Elect Engn, Hawthorn, Vic 3122, Australia
[3] CSIRO, Kensington, WA 6020, Australia
Source
SYMMETRY-BASEL | 2018, Vol. 10, No. 12
Keywords
multimodal interaction; eye tracking; empirical evaluation; human-robot interaction; COGNITIVE LOAD; TRACKING; DESIGN;
DOI
10.3390/sym10120680
Chinese Library Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences]
Subject Classification Codes
07; 0710; 09
Abstract
Control of robot arms is often required in engineering and can be performed using different methods. This study examined and symmetrically compared the use of a hand controller, an eye gaze tracker, and a combination of the two in a multimodal setup for control of a robot arm. Tasks of different complexities were defined, and twenty participants completed an experiment using these interaction modalities to solve the tasks. More specifically, there were three tasks: the first was to navigate a chess piece from one square to another pre-specified square; the second was the same as the first but required more moves to complete; and the third was to move multiple pieces to reach a pre-defined arrangement of the pieces. While gaze control has the potential to be more intuitive than a hand controller, it suffers from limitations in spatial accuracy and target selection. The multimodal setup aimed to mitigate the weaknesses of the eye gaze tracker, creating a superior system without simply relying on the controller. The experiment shows that the multimodal setup improved performance over the eye gaze tracker alone (p < 0.05) and was competitive with the controller-only setup, although it did not outperform it (p > 0.05).
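The fusion idea the abstract describes, gaze for fast but noisy target estimation and the controller for correction and confirmation, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation; the function names, the unit-grid board model, and the nudge/confirm scheme are all assumptions.

```python
# Hypothetical sketch of multimodal target selection on a chess board:
# gaze supplies a spatially noisy point estimate, and discrete controller
# input nudges and confirms the selection before it is sent to the arm.

SQUARE_SIZE = 1.0  # board squares on a unit grid: files 0-7, ranks 0-7

def gaze_to_square(gaze_x, gaze_y):
    """Snap a continuous gaze point to the nearest board square (col, row)."""
    col = min(7, max(0, int(gaze_x // SQUARE_SIZE)))
    row = min(7, max(0, int(gaze_y // SQUARE_SIZE)))
    return col, row

def fuse(gaze_x, gaze_y, nudge=(0, 0), confirmed=False):
    """Combine the gaze estimate with controller input.

    The controller contributes a per-square nudge (to recover from gaze
    inaccuracy of about one square) and a confirmation press; only a
    confirmed selection is returned as a target for the robot arm.
    """
    col, row = gaze_to_square(gaze_x, gaze_y)
    col = min(7, max(0, col + nudge[0]))
    row = min(7, max(0, row + nudge[1]))
    return (col, row) if confirmed else None

# Gaze lands one square below the intended target; the controller
# nudges up one rank and confirms.
print(fuse(4.3, 2.6, nudge=(0, 1), confirmed=True))  # → (4, 3)
```

The split of roles (continuous pointing via gaze, discrete correction and commitment via the controller) is one common way such a setup avoids the Midas-touch problem of pure gaze selection.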
Pages: 15
Related Papers (10 of 50 shown)
  • [1] A unified multimodal control framework for human-robot interaction
    Cherubini, Andrea
    Passama, Robin
    Fraisse, Philippe
    Crosnier, Andre
    ROBOTICS AND AUTONOMOUS SYSTEMS, 2015, 70 : 106 - 115
  • [2] Comparing alternative modalities in the context of multimodal human-robot interaction
    Saren, Suprakas
    Mukhopadhyay, Abhishek
    Ghose, Debasish
    Biswas, Pradipta
    JOURNAL ON MULTIMODAL USER INTERFACES, 2024, 18 (01) : 69 - 85
  • [3] Enabling multimodal human-robot interaction for the Karlsruhe humanoid robot
    Stiefelhagen, Rainer
    Ekenel, Hazim Kemal
    Fugen, Christian
    Gieselmann, Petra
    Holzapfel, Hartwig
    Kraft, Florian
    Nickel, Kai
    Voit, Michael
    Waibel, Alex
    IEEE TRANSACTIONS ON ROBOTICS, 2007, 23 (05) : 840 - 851
  • [4] Multimodal Interaction for Human-Robot Teams
    Burke, Dustin
    Schurr, Nathan
    Ayers, Jeanine
    Rousseau, Jeff
    Fertitta, John
    Carlin, Alan
    Dumond, Danielle
    UNMANNED SYSTEMS TECHNOLOGY XV, 2013, 8741
  • [5] Robot Gaze Behavior Affects Honesty in Human-Robot Interaction
    Schellen, Elef
    Bossi, Francesco
    Wykowska, Agnieszka
    FRONTIERS IN ARTIFICIAL INTELLIGENCE, 2021, 4
  • [6] Multimodal control for human-robot cooperation
    Cherubini, Andrea
    Passama, Robin
    Meline, Arnaud
    Crosnier, Andre
    Fraisse, Philippe
    2013 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2013, : 2202 - 2207
  • [7] Recent advancements in multimodal human-robot interaction
    Su, Hang
    Qi, Wen
    Chen, Jiahao
    Yang, Chenguang
    Sandoval, Juan
    Laribi, Med Amine
    FRONTIERS IN NEUROROBOTICS, 2023, 17
  • [8] Integrating a multimodal human-robot interaction method into a multi-robot control station
    Trouvain, BA
    Schneider, FE
    Wildermuth, D
    ROBOT AND HUMAN COMMUNICATION, PROCEEDINGS, 2001, : 468 - 472
  • [9] Implementation of Gaze Estimation in Dialogue to Human-Robot Interaction
    Somashekarappa, Vidya
    2022 10TH INTERNATIONAL CONFERENCE ON AFFECTIVE COMPUTING AND INTELLIGENT INTERACTION WORKSHOPS AND DEMOS, ACIIW, 2022
  • [10] Gaze Cueing and the Role of Presence in Human-Robot Interaction
    Friebe, Kassandra
    Samporova, Sabina
    Malinovska, Kristina
    Hoffmann, Matej
    SOCIAL ROBOTICS, ICSR 2022, PT I, 2022, 13817 : 402 - 414