Multimodal Interaction for Human-Robot Teams

Cited by: 0
Authors
Burke, Dustin [1 ]
Schurr, Nathan [1 ]
Ayers, Jeanine [1 ]
Rousseau, Jeff [1 ]
Fertitta, John [1 ]
Carlin, Alan [1 ]
Dumond, Danielle [1 ]
Affiliations
[1] Aptima Inc, Woburn, MA 01801 USA
Source
UNMANNED SYSTEMS TECHNOLOGY XV | 2013 / Vol. 8741
Keywords
Human-robot interaction; human-robot teams; multimodal control; context-aware; wearable computing; autonomy; manned-unmanned teaming
DOI
10.1117/12.2016334
CLC Number
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline Code
0808; 0809
Abstract
Unmanned ground vehicles have the potential to support small dismounted teams by mapping facilities, maintaining security in cleared buildings, and extending the team's reconnaissance and persistent-surveillance capability. For such autonomous systems to integrate with the team, we must move beyond current interaction methods based on heads-down teleoperation, which demand intensive human attention and compromise the operator's ability to maintain local situational awareness and ensure their own safety. This paper focuses on the design, development, and demonstration of a multimodal interaction system that incorporates naturalistic human gestures, voice commands, and a tablet interface. By providing multiple, partially redundant interaction modes, our system degrades gracefully in complex environments and enables the human operator to robustly select the interaction method best suited to the situational demands. For instance, the operator can silently command a team of robots with arm and hand gestures when stealth must be maintained. The tablet interface provides an overhead situational map that allows waypoint-based navigation for multiple ground robots in beyond-line-of-sight conditions. Using lightweight, wearable motion-sensing hardware either worn comfortably beneath the operator's clothing or integrated within the uniform, our non-vision-based approach enables accurate, continuous gesture recognition without line-of-sight constraints. To reduce the training needed to operate the system, we designed the interactions around familiar arm and hand gestures.
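The mode-selection idea in the abstract can be sketched as a simple arbitration rule: given the situational constraints, pick the most suitable of the three partially redundant modes, falling back gracefully when one is unavailable. This is an illustrative sketch only, not the paper's implementation; the mode names and constraint fields are hypothetical.

```python
# Hypothetical sketch of multimodal mode arbitration with graceful
# degradation across gesture, voice, and tablet interaction modes.
from dataclasses import dataclass


@dataclass
class Situation:
    stealth_required: bool = False   # operator must remain silent
    line_of_sight: bool = True       # robots are visible to the operator
    hands_occupied: bool = False     # operator cannot gesture or use the tablet


def select_mode(s: Situation) -> str:
    """Return the interaction mode best suited to the situation."""
    if s.stealth_required:
        return "gesture"   # silent arm/hand gestures preserve stealth
    if not s.line_of_sight:
        return "tablet"    # waypoint map works beyond line of sight
    if s.hands_occupied:
        return "voice"     # spoken commands keep the hands free
    return "gesture"       # default: naturalistic gestures


print(select_mode(Situation(stealth_required=True)))  # gesture
print(select_mode(Situation(line_of_sight=False)))    # tablet
print(select_mode(Situation(hands_occupied=True)))    # voice
```

The redundancy matters because each mode fails differently: gestures need the wearable sensors, voice fails under noise or stealth constraints, and the tablet requires heads-down attention, so the operator can always route around the degraded channel.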
Pages: 17