Multimodal Interaction for Human-Robot Teams

Cited by: 0
Authors
Burke, Dustin [1 ]
Schurr, Nathan [1 ]
Ayers, Jeanine [1 ]
Rousseau, Jeff [1 ]
Fertitta, John [1 ]
Carlin, Alan [1 ]
Dumond, Danielle [1 ]
Affiliations
[1] Aptima Inc, Woburn, MA 01801 USA
Source
UNMANNED SYSTEMS TECHNOLOGY XV | 2013 / Vol. 8741
Keywords
Human-robot interaction; human-robot teams; multimodal control; context-aware; wearable computing; autonomy; manned-unmanned teaming
DOI
10.1117/12.2016334
CLC Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline Codes
0808; 0809
Abstract
Unmanned ground vehicles have the potential to support small dismounted teams in mapping facilities, maintaining security in cleared buildings, and extending the team's reconnaissance and persistent surveillance capability. For such autonomous systems to integrate with the team, we must move beyond current interaction methods based on heads-down teleoperation, which require intensive human attention and degrade the operator's ability to maintain local situational awareness and ensure their own safety. This paper focuses on the design, development, and demonstration of a multimodal interaction system that incorporates naturalistic human gestures, voice commands, and a tablet interface. By providing multiple, partially redundant interaction modes, our system degrades gracefully in complex environments and enables the human operator to robustly select the interaction method best suited to the situational demands. For instance, the operator can silently use arm and hand gestures to command a team of robots when it is important to maintain stealth. The tablet interface provides an overhead situational map that allows waypoint-based navigation for multiple ground robots in beyond-line-of-sight conditions. Using lightweight, wearable motion-sensing hardware either worn comfortably beneath the operator's clothing or integrated within the uniform, our non-vision-based approach enables accurate, continuous gesture recognition without line-of-sight constraints. To reduce the training necessary to operate the system, we designed the interactions around familiar arm and hand gestures.
Pages: 17
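To make the graceful-degradation idea in the abstract concrete, the sketch below (illustrative only, not code from the paper; every class, function, and parameter name is hypothetical) shows one simple way partially redundant interaction modes could be arbitrated: each recognizer emits a command hypothesis with a confidence score, and the command is taken from the most confident mode that clears a floor, so a channel lost to noise or stowed for stealth does not silence the operator.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class ModeInput:
    """A command hypothesis produced by one interaction mode."""
    mode: str          # "gesture", "voice", or "tablet"
    command: str       # e.g. "halt", "follow_me", "goto_waypoint"
    confidence: float  # recognizer confidence in [0, 1]


def select_command(inputs: List[ModeInput],
                   min_confidence: float = 0.6) -> Optional[ModeInput]:
    """Return the most trusted hypothesis, or None if no mode clears the floor.

    Because the modes are partially redundant, the same command can arrive on
    several channels; if one degrades (voice masked by engine noise, tablet
    stowed for stealth), the best remaining hypothesis above the floor wins.
    """
    usable = [i for i in inputs if i.confidence >= min_confidence]
    if not usable:
        return None  # nothing trustworthy; prompt the operator to repeat
    return max(usable, key=lambda i: i.confidence)


# Example: voice is drowned out by engine noise, so the gesture channel wins.
hypotheses = [
    ModeInput("voice", "halt", 0.35),
    ModeInput("gesture", "halt", 0.82),
]
chosen = select_command(hypotheses)
if chosen is not None:
    print(f"Dispatching '{chosen.command}' (recognized via {chosen.mode})")

A fielded system would presumably also weight modes by context (for example, preferring gestures when radio silence is required), but the fallback structure would be the same.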