Multimodal Path Planning Using Potential Field for Human-Robot Interaction

Cited: 0
Authors
Kawasaki, Yosuke [1 ]
Yorozu, Ayanori [2 ]
Takahashi, Masaki [1 ]
Affiliations
[1] Keio Univ, Dept Syst Design Engn, Kohoku Ku, 3-14-1 Hiyoshi, Yokohama, Kanagawa 2238522, Japan
[2] Keio Univ, Grad Sch Sci & Technol, Kohoku Ku, 3-14-1 Hiyoshi, Yokohama, Kanagawa 2238522, Japan
Source
INTELLIGENT AUTONOMOUS SYSTEMS 15, IAS-15 | 2019 / Vol. 867
Funding
Japan Science and Technology Agency
Keywords
Human-robot interaction; Multimodal path planning; Potential field;
DOI
10.1007/978-3-030-01370-7_47
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Discipline code
0812
Abstract
In human-robot interaction, a robot must move to a position from which it can obtain precise information about people, such as their positions, postures, and voices, because the accuracy of human recognition depends on the positional relation between the person and the robot. In addition, the robot should choose which sensor data to focus on during a task that involves interaction. The path for approaching a person should therefore be adapted to improve human recognition accuracy and ease task execution. Accordingly, a path-planning method is needed that simultaneously considers sensor characteristics, human recognition accuracy, and the task contents. Although some previous studies proposed path-planning methods that consider sensor characteristics, they did not consider the task or the human recognition accuracy, both of which are important for practical application. Consequently, we present a path-planning method based on multimodal information that fuses the task contents and the human recognition accuracy simultaneously.
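This record does not detail the authors' multimodal potential-field formulation. As background for the title's approach, the following is a minimal sketch of a classic attractive/repulsive potential-field planner (the standard textbook method, not the paper's method; the function name, gains, and influence radius are all illustrative assumptions):

```python
import math

def plan_path(start, goal, obstacles, k_att=1.0, k_rep=100.0,
              d0=2.0, step=0.05, max_iters=2000, tol=0.1):
    """Gradient-descent planner over a classic 2D potential field.

    start, goal: (x, y) tuples; obstacles: list of (x, y) point obstacles.
    An attractive quadratic potential pulls toward the goal; each obstacle
    adds a repulsive potential active only within influence radius d0.
    Returns the sequence of visited points.
    """
    path = [start]
    q = start
    gx, gy = goal
    for _ in range(max_iters):
        qx, qy = q
        # Attractive potential gradient: k_att * (q - goal)
        fx = k_att * (qx - gx)
        fy = k_att * (qy - gy)
        # Repulsive gradients from obstacles inside the influence radius
        for ox, oy in obstacles:
            dx, dy = qx - ox, qy - oy
            d = math.hypot(dx, dy)
            if 1e-9 < d < d0:
                coeff = k_rep * (1.0 / d - 1.0 / d0) / (d ** 3)
                fx -= coeff * dx  # push away from the obstacle
                fy -= coeff * dy
        norm = math.hypot(fx, fy)
        if norm < 1e-9:
            break  # flat region or local minimum: cannot make progress
        # Take a fixed-length step down the potential gradient
        q = (qx - step * fx / norm, qy - step * fy / norm)
        path.append(q)
        if math.hypot(q[0] - gx, q[1] - gy) < tol:
            break  # close enough to the goal
    return path
```

The paper's contribution, per the abstract, is what potential to build: one that also encodes sensor characteristics, human recognition accuracy, and the task contents, rather than only goal attraction and obstacle repulsion as in this sketch. Note that plain gradient descent on such fields can stall in local minima, a well-known limitation of the basic method.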
Pages: 597-609
Page count: 13