Pointing Gestures for Human-Robot Interaction in Service Robotics: A Feasibility Study

Cited by: 0
Authors
Pozzi, Luca [1 ]
Gandolla, Marta [2 ]
Roveda, Loris [3 ]
Affiliations
[1] Politecnico di Milano, WE-COBOT Lab, Polo Territoriale di Lecco, Department of Mechanical Engineering, Lecco, Italy
[2] Politecnico di Milano, Department of Mechanical Engineering, Milan, Italy
[3] Università della Svizzera italiana (USI), Scuola Universitaria Professionale della Svizzera Italiana (SUPSI), Istituto Dalle Molle di Studi sull'Intelligenza Artificiale (IDSIA), Lugano, Switzerland
Source
COMPUTERS HELPING PEOPLE WITH SPECIAL NEEDS, ICCHP-AAATE 2022, PT II | 2022
Keywords
Human-Robot Interaction; Pointing; Service robotics; Action detection
DOI
10.1007/978-3-031-08645-8_54
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Research in service robotics strives to improve people's quality of life by introducing robotic helpers for everyday activities. From this ambition arises the need to enable natural communication between robots and ordinary people. For this reason, Human-Robot Interaction (HRI) is an extensively investigated topic that goes beyond language-based exchange of information to include all relevant facets of communication. Each communication channel (e.g., hearing, sight, touch) comes with its own strengths and limits, so channels are often combined to improve robustness and naturalness. In this contribution, an HRI framework is presented that adopts pointing gestures as the preferred interaction strategy. Pointing gestures are selected because they are an innate behavior for directing another's attention, and thus could represent a natural way to request a service from a robot. To complement the visual information, the user can be prompted to give voice commands to resolve ambiguities and prevent the execution of unintended actions. The two-layer (perceptive and semantic) architecture of the proposed HRI system is described. The perceptive layer is responsible for object mapping, action detection, and assessment of the indicated direction; it also listens for the user's voice commands. To avoid privacy issues and to limit the robot's computational load, the interaction is triggered by a wake-word detection system. The semantic layer receives the information processed by the perceptive layer and determines which actions are available for the selected object. The decision is based on the object's characteristics; contextual information and the user's vocal feedback are exploited to resolve ambiguities. A pilot implementation of the semantic layer is detailed, and qualitative results are shown. Preliminary findings on the validity of the proposed system, as well as on the limitations of a purely vision-based approach, are discussed.
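The perceptive layer's "assessment of the indicated direction" reduces to a geometric test: cast a ray along the pointing arm and pick the mapped object closest to it. The Python sketch below illustrates that idea under stated assumptions; it is not the paper's implementation, and the keypoint pair, the 0.3 m tolerance, and the coordinates in the usage example are all hypothetical.

import numpy as np

def pointing_ray(shoulder: np.ndarray, wrist: np.ndarray):
    """Ray origin and unit direction from two 3-D arm keypoints
    (elbow-wrist or eye-wrist pairs are common alternatives)."""
    direction = wrist - shoulder
    return wrist, direction / np.linalg.norm(direction)

def select_object(origin, direction, object_map, max_dist=0.3):
    """Return the mapped object closest to the pointing ray, or None
    if every object lies farther than max_dist metres from it."""
    best, best_d = None, max_dist
    for name, pos in object_map.items():
        t = np.dot(pos - origin, direction)   # projection onto the ray
        if t < 0:
            continue                          # object behind the hand
        d = np.linalg.norm(pos - (origin + t * direction))
        if d < best_d:
            best, best_d = name, d
    return best

# Usage with made-up coordinates (metres, robot frame): the ray from the
# shoulder through the wrist passes near the bottle, so it is chosen.
objects = {"bottle": np.array([1.0, 0.1, 0.8]),
           "book": np.array([0.5, -0.9, 0.7])}
origin, direction = pointing_ray(np.array([0.0, 0.0, 1.4]),
                                 np.array([0.3, 0.05, 1.2]))
print(select_object(origin, direction, objects))  # -> bottle

Keypoints of this kind are what off-the-shelf skeletal pose estimators provide, which is why a purely vision-based pipeline is feasible at all; its accuracy limits are exactly what the paper's discussion addresses.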
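The semantic layer's decision step can likewise be sketched compactly: an affordance table maps each object category to candidate actions, contextual rules prune them, and a vocal prompt resolves any remaining ambiguity. The sketch below is a minimal illustration, not the pilot implementation from the paper; the AFFORDANCES table, the door rule, and the ask_user callback are hypothetical stand-ins for the object characteristics, contextual information, and vocal feedback described above.

from dataclasses import dataclass, field

# Hypothetical affordance table: object category -> actions the robot offers.
AFFORDANCES = {
    "bottle": ["bring", "throw_away"],
    "book": ["bring", "open"],
    "door": ["open", "close"],
}

@dataclass
class PerceptiveOutput:
    """What the perceptive layer hands over: the object selected by the
    pointing gesture plus contextual flags gathered from the scene."""
    pointed_object: str
    context: dict = field(default_factory=dict)

def available_actions(obj, context):
    """Semantic-layer core: filter the affordance table by context."""
    actions = AFFORDANCES.get(obj, [])
    if obj == "door" and context.get("door_open", False):
        actions = [a for a in actions if a != "open"]  # already open
    return actions

def resolve(perception, ask_user):
    """Pick an action; fall back to a vocal prompt when ambiguous."""
    actions = available_actions(perception.pointed_object, perception.context)
    if not actions:
        return None            # nothing the robot can do with this object
    if len(actions) == 1:
        return actions[0]      # unambiguous, execute directly
    answer = ask_user(f"Should I {' or '.join(actions)} the "
                      f"{perception.pointed_object}?")
    return answer if answer in actions else None

# Usage: a pointed bottle admits two actions, so the user is asked;
# here the speech pipeline is faked with a lambda that answers "bring".
print(resolve(PerceptiveOutput("bottle"), ask_user=lambda q: "bring"))

Keeping speech behind a single ask_user callback mirrors the paper's separation of concerns: the semantic layer reasons over objects and context, while listening (gated by the wake word) stays in the perceptive layer.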
Pages: 461–468
Page count: 8