Multimodal Path Planning Using Potential Field for Human-Robot Interaction

Times Cited: 0
Authors
Kawasaki, Yosuke [1 ]
Yorozu, Ayanori [2 ]
Takahashi, Masaki [1 ]
Affiliations
[1] Keio Univ, Dept Syst Design Engn, Kohoku Ku, 3-14-1 Hiyoshi, Yokohama, Kanagawa 2238522, Japan
[2] Keio Univ, Grad Sch Sci & Technol, Kohoku Ku, 3-14-1 Hiyoshi, Yokohama, Kanagawa 2238522, Japan
Source
INTELLIGENT AUTONOMOUS SYSTEMS 15, IAS-15 | 2019, Vol. 867
Funding
Japan Science and Technology Agency (JST)
Keywords
Human-robot interaction; Multimodal path planning; Potential field
DOI
10.1007/978-3-030-01370-7_47
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Discipline Classification Code
0812
Abstract
In human-robot interaction, a robot must move to a position from which it can obtain precise information about people, such as their positions, postures, and voices, because the accuracy of human recognition depends on the positional relationship between the person and the robot. In addition, the robot should choose which sensor data to focus on during a task that involves interaction. Therefore, the path along which the robot approaches people should be adapted to improve human recognition accuracy and make the task easier to perform. Accordingly, a path-planning method is needed that simultaneously considers sensor characteristics, human recognition accuracy, and the task contents. Although some previous studies proposed path-planning methods that consider sensor characteristics, they did not consider the task or the human recognition accuracy, both of which are important for practical application. Consequently, we present a path-planning method based on multimodal information that fuses the task contents and the human recognition accuracy simultaneously.
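As a rough illustration of the potential-field idea mentioned in the abstract, the following Python/NumPy sketch combines a standard attractive/repulsive field with a hypothetical term that prefers a fixed sensing distance to a person. It is a generic minimal example under assumed names and gains (attractive_grad, repulsive_grad, observation_grad, d_pref), not the authors' formulation of how task contents and recognition accuracy are fused.

# Minimal artificial potential field (APF) sketch; generic illustration only,
# not the paper's method. observation_grad, d_pref, and all gains are
# hypothetical stand-ins for a recognition-accuracy term.
import numpy as np

def attractive_grad(pos, goal, k_att=1.0):
    # Gradient of the quadratic attractive potential 0.5*k*||pos - goal||^2.
    return k_att * (pos - goal)

def repulsive_grad(pos, obstacles, k_rep=1.0, d0=1.0):
    # Gradient of the standard repulsive potential, active within range d0.
    grad = np.zeros(2)
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 1e-9 < d < d0:
            grad += -k_rep * (1.0 / d - 1.0 / d0) * (1.0 / d**2) * (diff / d)
    return grad

def observation_grad(pos, person, d_pref=1.5, k_obs=0.5):
    # Hypothetical term: penalize deviation from a preferred sensing
    # distance to the person, i.e. 0.5*k*(d - d_pref)^2.
    diff = pos - person
    d = np.linalg.norm(diff)
    if d < 1e-9:
        return np.zeros(2)
    return k_obs * (d - d_pref) * (diff / d)

def plan_path(start, goal, people, obstacles, step=0.05, max_iters=500):
    # Follow the negative gradient of the combined field from start to goal.
    pos = np.asarray(start, dtype=float)
    goal = np.asarray(goal, dtype=float)
    path = [pos.copy()]
    for _ in range(max_iters):
        g = attractive_grad(pos, goal) + repulsive_grad(pos, obstacles)
        for person in people:
            g += observation_grad(pos, person)
        pos = pos - step * g
        path.append(pos.copy())
        if np.linalg.norm(pos - goal) < 0.05:
            break
    return np.array(path)

# Example: approach a person while keeping a comfortable sensing distance.
path = plan_path(start=[0.0, 0.0], goal=[3.5, 3.0],
                 people=[np.array([4.0, 3.5])],
                 obstacles=[np.array([2.0, 1.5])])

In such a formulation the relative weighting of the goal, obstacle, and observation terms determines whether the planned path favors a short route or a vantage point with better sensing; the actual weighting used in the paper is defined by the authors, not by this sketch.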
Pages: 597-609
Number of pages: 13