Symmetric Evaluation of Multimodal Human-Robot Interaction with Gaze and Standard Control

Cited by: 3
Authors
Jones, Ethan R. [1 ]
Chinthammit, Winyu [1 ]
Huang, Weidong [2 ]
Engelke, Ulrich [3 ]
Lueg, Christopher [1 ]
Affiliations
[1] Univ Tasmania, Sch Technol Environm & Design, Hobart, Tas 7005, Australia
[2] Swinburne Univ Technol, Sch Software & Elect Engn, Hawthorn, Vic 3122, Australia
[3] CSIRO, Kensington, WA 6020, Australia
Source
SYMMETRY-BASEL | 2018, Vol. 10, Issue 12
Keywords
multimodal interaction; eye tracking; empirical evaluation; human-robot interaction; COGNITIVE LOAD; TRACKING; DESIGN;
DOI
10.3390/sym10120680
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Discipline Classification Codes
07 ; 0710 ; 09 ;
Abstract
Control of robot arms is often required in engineering and can be performed using different methods. This study examined and symmetrically compared the use of a controller, an eye gaze tracker, and a combination of the two in a multimodal setup for control of a robot arm. Tasks of different complexities were defined, and twenty participants completed an experiment using these interaction modalities to solve the tasks. More specifically, there were three tasks: the first was to navigate a chess piece from one square to another pre-specified square; the second was the same as the first, but required more moves to complete; and the third was to move multiple pieces to reach a pre-defined arrangement of the pieces. While gaze control has the potential to be more intuitive than a hand controller, it suffers from limitations with regard to spatial accuracy and target selection. The multimodal setup aimed to mitigate the weaknesses of the eye gaze tracker, creating a superior system without simply relying on the controller. The experiment shows that the multimodal setup improves performance over the eye gaze tracker alone (p < 0.05) and was competitive with the controller-only setup, although it did not outperform it (p > 0.05).
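The abstract does not specify how the modalities were fused, but one common way a multimodal setup can mitigate gaze-tracker spatial inaccuracy is to snap the noisy gaze point to the nearest discrete target (here, a chessboard square) and require a controller button press to confirm the selection. The sketch below is purely illustrative, not the authors' implementation; the square size and function names are assumptions.

```python
# Hypothetical sketch: gaze proposes a coarse target, the controller confirms.
# SQUARE_SIZE is an assumed pixel pitch for an 8x8 board, not from the paper.

SQUARE_SIZE = 50  # pixels per square (assumed)

def snap_gaze_to_square(gaze_x, gaze_y):
    """Map a noisy gaze point (pixels) to board coordinates (col, row),
    clamped to the 8x8 board so off-board gaze still yields a valid square."""
    col = min(max(int(gaze_x // SQUARE_SIZE), 0), 7)
    row = min(max(int(gaze_y // SQUARE_SIZE), 0), 7)
    return col, row

def select_square(gaze_x, gaze_y, button_pressed):
    """Fuse modalities: the gaze tracker nominates a square; the selection
    only fires when the controller button is pressed."""
    square = snap_gaze_to_square(gaze_x, gaze_y)
    return square if button_pressed else None

if __name__ == "__main__":
    # Gaze lands at pixel (137, 212) with the confirm button held down.
    print(select_square(137, 212, True))   # (2, 4)
    print(select_square(137, 212, False))  # None (gaze alone never commits)
```

Requiring an explicit confirmation avoids the "Midas touch" problem of gaze-only interfaces, where every fixation risks being interpreted as a command.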
Pages: 15
Related Papers
50 in total
  • [21] User feedback in human-robot interaction: Prosody, gaze and timing
    Skantze, Gabriel
    Oertel, Catharine
    Hjalmarsson, Anna
    14TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2013), VOLS 1-5, 2013, : 1900 - 1904
  • [22] Perception and Evaluation in Human-Robot Interaction: The Human-Robot Interaction Evaluation Scale (HRIES)-A Multicomponent Approach of Anthropomorphism
    Spatola, Nicolas
    Kuhnlenz, Barbara
    Cheng, Gordon
    INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS, 2021, 13 (07) : 1517 - 1539
  • [23] Multimodal Interface for Human-Robot Collaboration
    Rautiainen, Samu
    Pantano, Matteo
    Traganos, Konstantinos
    Ahmadi, Seyedamir
    Saenz, Jose
    Mohammed, Wael M.
    Lastra, Jose L. Martinez
    MACHINES, 2022, 10 (10)
  • [24] Evaluation of Robot Emotion Expressions for Human-Robot Interaction
    Cardenas, Pedro
    Garcia, Jose
    Begazo, Rolinson
    Aguilera, Ana
    Dongo, Irvin
    Cardinale, Yudith
    INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS, 2024, : 2019 - 2041
  • [25] Age-Related Differences in the Perception of Robotic Referential Gaze in Human-Robot Interaction
    Morillo-Mendez, Lucas
    Schrooten, Martien G. S.
    Loutfi, Amy
    Mozos, Oscar Martinez
    INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS, 2024, 16 (06) : 1069 - 1081
  • [26] Human-Robot Interaction and Collaborative Manipulation with Multimodal Perception Interface for Human
    Huang, Shouren
    Ishikawa, Masatoshi
    Yamakawa, Yuji
    PROCEEDINGS OF THE 7TH INTERNATIONAL CONFERENCE ON HUMAN-AGENT INTERACTION (HAI'19), 2019, : 289 - 291
  • [27] Multimodal Human-Robot Interaction from the Perspective of a Speech Scientist
    Rigoll, Gerhard
    SPEECH AND COMPUTER (SPECOM 2015), 2015, 9319 : 3 - 10
  • [28] Learning Multimodal Confidence for Intention Recognition in Human-Robot Interaction
    Zhao, Xiyuan
    Li, Huijun
    Miao, Tianyuan
    Zhu, Xianyi
    Wei, Zhikai
    Tan, Lifen
    Song, Aiguo
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2024, 9 (09) : 7819 - 7826
  • [29] Multimodal Human-Robot Interaction for Walker-Assisted Gait
    Cifuentes, Carlos A.
    Rodriguez, Camilo
    Frizera-Neto, Anselmo
    Bastos-Filho, Teodiano Freire
    Carelli, Ricardo
    IEEE SYSTEMS JOURNAL, 2016, 10 (03) : 933 - 943
  • [30] A Multimodal Emotion Detection System during Human-Robot Interaction
    Alonso-Martin, Fernando
    Malfaz, Maria
    Sequeira, Joao
    Gorostiza, Javier F.
    Salichs, Miguel A.
    SENSORS, 2013, 13 (11) : 15549 - 15581