Multimodal Interaction for Human-Robot Teams

Cited by: 0
Authors
Burke, Dustin [1 ]
Schurr, Nathan [1 ]
Ayers, Jeanine [1 ]
Rousseau, Jeff [1 ]
Fertitta, John [1 ]
Carlin, Alan [1 ]
Dumond, Danielle [1 ]
Affiliations
[1] Aptima Inc, Woburn, MA 01801 USA
Source
UNMANNED SYSTEMS TECHNOLOGY XV | 2013 / Vol. 8741
Keywords
Human-robot interaction; human-robot teams; multimodal control; context-aware; wearable computing; autonomy; manned-unmanned teaming;
DOI
10.1117/12.2016334
CLC Classification
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Subject Classification Codes
0808; 0809
Abstract
Unmanned ground vehicles have the potential for supporting small dismounted teams in mapping facilities, maintaining security in cleared buildings, and extending the team's reconnaissance and persistent surveillance capability. In order for such autonomous systems to integrate with the team, we must move beyond current interaction methods using heads-down teleoperation which require intensive human attention and affect the human operator's ability to maintain local situational awareness and ensure their own safety. This paper focuses on the design, development and demonstration of a multimodal interaction system that incorporates naturalistic human gestures, voice commands, and a tablet interface. By providing multiple, partially redundant interaction modes, our system degrades gracefully in complex environments and enables the human operator to robustly select the most suitable interaction method given the situational demands. For instance, the human can silently use arm and hand gestures for commanding a team of robots when it is important to maintain stealth. The tablet interface provides an overhead situational map allowing waypoint-based navigation for multiple ground robots in beyond-line-of-sight conditions. Using lightweight, wearable motion sensing hardware either worn comfortably beneath the operator's clothing or integrated within their uniform, our non-vision-based approach enables an accurate, continuous gesture recognition capability without line-of-sight constraints. To reduce the training necessary to operate the system, we designed the interactions around familiar arm and hand gestures.
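The abstract's central design idea is that partially redundant input modes (gesture, voice, tablet) let the operator keep commanding robots when any single channel fails, e.g. noise masking voice or stealth ruling it out. The following minimal sketch illustrates one way such confidence-based arbitration across modalities could work; the class names, command vocabulary, and threshold are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch of confidence-based arbitration across partially
# redundant input modalities (gesture, voice, tablet). Names, thresholds,
# and commands are assumptions for illustration only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Interpretation:
    modality: str      # "gesture", "voice", or "tablet"
    command: str       # e.g. "HALT", "FOLLOW_ME", "GOTO_WAYPOINT"
    confidence: float  # recognizer confidence in [0, 1]

def arbitrate(candidates: list[Interpretation],
              min_confidence: float = 0.6) -> Optional[Interpretation]:
    """Pick the most confident interpretation above a threshold.

    Because the modalities are partially redundant, a degraded channel
    (e.g. voice masked by engine noise) is simply outranked by another
    channel carrying the same intent, giving graceful degradation.
    """
    usable = [c for c in candidates if c.confidence >= min_confidence]
    if not usable:
        return None  # no modality is trustworthy; prompt the operator to repeat
    return max(usable, key=lambda c: c.confidence)

if __name__ == "__main__":
    frame = [
        Interpretation("voice", "HALT", 0.35),           # masked by noise
        Interpretation("gesture", "HALT", 0.88),         # wearable motion sensor
        Interpretation("tablet", "GOTO_WAYPOINT", 0.0),  # no touch input this cycle
    ]
    print(arbitrate(frame))  # selects the gesture interpretation of HALT

In this sketch the operator implicitly "selects" the most suitable mode just by using it, since the strongest recognizer output wins; an explicit mode switch on the tablet would be an equally plausible design.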
Pages: 17
Related Papers
50 records in total
  • [11] Knowledge acquisition through human-robot multimodal interaction
    Randelli, Gabriele
    Bonanni, Taigo Maria
    Iocchi, Luca
    Nardi, Daniele
    INTELLIGENT SERVICE ROBOTICS, 2013, 6 (01) : 19 - 31
  • [12] Multimodal Engagement Prediction in Multiperson Human-Robot Interaction
    Abdelrahman, Ahmed A.
    Strazdas, Dominykas
    Khalifa, Aly
    Hintz, Jan
    Hempel, Thorsten
    Al-Hamadi, Ayoub
    IEEE ACCESS, 2022, 10 : 61980 - 61991
  • [13] Challenges of Multimodal Interaction in the Era of Human-Robot Coexistence
    Zhang, Zhengyou
    ICMI'19: PROCEEDINGS OF THE 2019 INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, 2019, : 2 - 2
  • [14] A unified multimodal control framework for human-robot interaction
    Cherubini, Andrea
    Passama, Robin
    Fraisse, Philippe
    Crosnier, Andre
    ROBOTICS AND AUTONOMOUS SYSTEMS, 2015, 70 : 106 - 115
  • [15] Multimodal QOL Estimation During Human-Robot Interaction
    Nakagawa, Satoshi
    Kuniyoshi, Yasuo
    2024 IEEE INTERNATIONAL CONFERENCE ON DIGITAL HEALTH, ICDH 2024, 2024, : 23 - 32
  • [16] A Multimodal Human-Robot Interaction Manager for Assistive Robots
    Abbasi, Bahareh
    Monaikul, Natawut
    Rysbek, Zhanibek
    Di Eugenio, Barbara
    Zefran, Milos
    2019 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2019, : 6756 - 6762
  • [17] DiGeTac Unit for Multimodal Communication in Human-Robot Interaction
    Al, Gorkem Anil
    Martinez-Hernandez, Uriel
    IEEE SENSORS LETTERS, 2024, 8 (05)
  • [18] Probabilistic Multimodal Modeling for Human-Robot Interaction Tasks
    Campbell, Joseph
    Stepputtis, Simon
    Amor, Heni Ben
    ROBOTICS: SCIENCE AND SYSTEMS XV, 2019,
  • [19] Multimodal Target Prediction for Rapid Human-Robot Interaction
    Mitra, Mukund
    Patil, Ameya
    Mothish, G. V. S.
    Kumar, Gyanig
    Mukhopadhyay, Abhishek
    Murthy, L. R. D.
    Chakrabarti, Partha Pratim
    Biswas, Pradipta
    COMPANION PROCEEDINGS OF 2024 29TH ANNUAL CONFERENCE ON INTELLIGENT USER INTERFACES, IUI 2024 COMPANION, 2024, : 18 - 23
  • [20] A dialogue manager for multimodal human-robot interaction and learning of a humanoid robot
    Holzapfel, Hartwig
    INDUSTRIAL ROBOT-THE INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH AND APPLICATION, 2008, 35 (06): : 528 - 535