Pointing Gestures for Human-Robot Interaction in Service Robotics: A Feasibility Study

Cited by: 0
Authors
Pozzi, Luca [1 ]
Gandolla, Marta [2 ]
Roveda, Loris [3 ]
Affiliations
[1] Politecn Milan, WE COBOT Lab Polo Territoriale Lecco, Mech Dept, Lecco, Italy
[2] Politecn Milan, Mech Dept, Milan, Italy
[3] Univ Svizzera Italiana USI, Scuola Univ Profess Svizzera Italiana SUPSI, Ist Dalle Molle Studi Intelligenza Artificiale ID, Lugano, Switzerland
Source
COMPUTERS HELPING PEOPLE WITH SPECIAL NEEDS, ICCHP-AAATE 2022, PT II | 2022
Keywords
Human-Robot Interaction; Pointing; Service robotics; Action detection
DOI
10.1007/978-3-031-08645-8_54
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Research in service robotics strives to improve people's quality of life through the introduction of robotic helpers for everyday activities. From this ambition arises the need to enable natural communication between robots and ordinary people. For this reason, Human-Robot Interaction (HRI) is an extensively investigated topic that goes beyond language-based exchange of information to include all the relevant facets of communication. Each communication channel (e.g. hearing, sight, touch) comes with its own strengths and limits, so channels are often combined to improve robustness and naturalness. In this contribution, an HRI framework is presented that adopts pointing gestures as the preferred interaction strategy. Pointing gestures are selected because they are an innate behavior for directing another person's attention, and thus could represent a natural way to request a service from a robot. To complement the visual information, the user can be prompted to give voice commands to resolve ambiguities and prevent the execution of unintended actions. The two-layer (perceptive and semantic) architecture of the proposed HRI system is described. The perceptive layer is responsible for object mapping, action detection, and assessment of the indicated direction; it also listens for the user's voice commands. To avoid privacy issues and to avoid burdening the robot's computational resources, the interaction is triggered by a wake-word detection system. The semantic layer receives the information processed by the perceptive layer and determines which actions are available for the selected object. The decision is based on the object's characteristics and contextual information, and the user's vocal feedback is exploited to resolve ambiguities. A pilot implementation of the semantic layer is detailed, and qualitative results are shown. The preliminary findings on the validity of the proposed system, as well as on the limitations of a purely vision-based approach, are discussed.
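
The paper itself contains no code; the following Python sketch is only an illustration of the two-layer flow summarized in the abstract. All names (PointingEvent, AFFORDANCES, semantic_layer) are hypothetical, and the perceptive output is stubbed with fixed data rather than computed from real sensors, so this is not the authors' implementation.

from dataclasses import dataclass
from typing import Optional

@dataclass
class PointingEvent:
    """Output of the (stubbed) perceptive layer."""
    target: str                   # object intersected by the pointing ray
    confidence: float             # quality of the direction estimate
    voice_command: Optional[str]  # transcript captured after the wake word

# Hypothetical affordance table for the semantic layer: which actions
# are available for each object class.
AFFORDANCES = {
    "bottle": ["grasp", "bring"],
    "door": ["open", "close"],
    "light": ["switch on", "switch off"],
}

def semantic_layer(event: PointingEvent) -> Optional[str]:
    """Select an action for the pointed object, using the voice command
    (if any) to resolve ambiguity between multiple affordances."""
    actions = AFFORDANCES.get(event.target, [])
    if not actions:
        return None               # unknown object: no available action
    if len(actions) == 1:
        return actions[0]         # unambiguous: act directly
    if event.voice_command:
        for action in actions:    # ambiguous: match vocal feedback
            if action in event.voice_command:
                return action
    return None                   # still ambiguous: prompt the user

# Example: the user points at the light and, after the wake word,
# says "switch on the light".
event = PointingEvent(target="light", confidence=0.92,
                      voice_command="switch on the light")
print(semantic_layer(event))      # -> "switch on"

In the real system, the event would be produced by the perceptive layer (pointing-direction estimation, object mapping, and wake-word-gated speech recognition), and the affordance lookup would also take contextual information into account, as the abstract describes.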
Pages: 461-468 (8 pages)