An Integrative Framework of Human Hand Gesture Segmentation for Human-Robot Interaction

Cited by: 40
Authors
Ju, Zhaojie [1 ]
Ji, Xiaofei [2 ]
Li, Jing [3 ,4 ]
Liu, Honghai [1 ]
Affiliations
[1] Univ Portsmouth, Sch Comp, Portsmouth PO1 2UP, Hants, England
[2] Shenyang Aerosp Univ, Sch Automat, Shenyang 110136, Liaoning, Peoples R China
[3] Nanchang Univ, Sch Informat Engn, Nanchang 330047, Jiangxi, Peoples R China
[4] Nanchang Univ, Jiangxi Prov Key Lab Intelligent Informat Syst, Nanchang 330047, Jiangxi, Peoples R China
Source
IEEE SYSTEMS JOURNAL | 2017, Vol. 11, Issue 3
Funding
National Natural Science Foundation of China; Engineering and Physical Sciences Research Council (UK);
Keywords
Alignment; hand gesture segmentation; human-computer interaction (HCI); RGB-depth (RGB-D); CAMERA CALIBRATION; KINECT SENSOR; RECOGNITION; DEPTH;
DOI
10.1109/JSYST.2015.2468231
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
This paper proposes a novel framework to segment hand gestures in RGB-depth (RGB-D) images captured by Kinect using humanlike approaches for human-robot interaction. The goal is to reduce the error of Kinect sensing and, consequently, to improve the precision of hand gesture segmentation for robot NAO. The proposed framework consists of two main novel approaches. First, the depth map and RGB image are aligned by using the genetic algorithm to estimate key points, and the alignment is robust to uncertainties of the extracted point numbers. Then, a novel approach is proposed to refine the edge of the tracked hand gestures in RGB images by applying a modified expectation-maximization (EM) algorithm based on Bayesian networks. The experimental results demonstrate that the proposed alignment method is capable of precisely matching the depth maps with RGB images, and the EM algorithm further effectively adjusts the RGB edges of the segmented hand gestures. The proposed framework has been integrated and validated in a system of human-robot interaction to improve NAO robot's performance of understanding and interpretation.
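The abstract's second stage refines the RGB edges of a segmented hand with a modified EM algorithm over a Bayesian network. The paper's actual model is not reproduced here; as a much simplified illustrative stand-in, the sketch below fits a plain two-component 1-D Gaussian mixture to pixel intensity values along a segmentation boundary by standard EM, then reassigns each ambiguous pixel to the more probable component (hand vs. background). The function names `em_gmm_1d` and `classify` and the use of a single intensity channel are assumptions for illustration only.

```python
import math

def em_gmm_1d(values, iters=50):
    """Fit a two-component 1-D Gaussian mixture by EM.

    Returns (weights, means, variances) for the two components,
    initialised with means spread across the data range.
    """
    n = len(values)
    lo, hi = min(values), max(values)
    mean = sum(values) / n
    v0 = sum((x - mean) ** 2 for x in values) / n  # sample variance as initial spread
    mu = [lo + 0.25 * (hi - lo), lo + 0.75 * (hi - lo)]
    var = [v0, v0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each value
        resp = []
        for x in values:
            p = [w[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means, and variances from responsibilities
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / n
            mu[k] = sum(r[k] * x for r, x in zip(resp, values)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, values)) / nk, 1e-6)
    return w, mu, var

def classify(x, w, mu, var):
    """Assign an intensity value to the more probable component (0 or 1)."""
    p = [w[k] / math.sqrt(2 * math.pi * var[k])
         * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
    return 0 if p[0] >= p[1] else 1
```

Given intensity samples drawn from a dark background cluster and a brighter skin cluster, `em_gmm_1d` separates the two modes, and `classify` can then relabel boundary pixels that the coarse depth-based segmentation placed on the wrong side of the edge.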
Pages: 1326-1336
Number of pages: 11
Related Papers
50 records
  • [31] Mutual Recognition in Human-Robot Interaction: a Deflationary Account
    Brinck I.
    Balkenius C.
    Philosophy & Technology, 2020, 33 (1) : 53 - 70
  • [32] Facial Emotion Expressions in Human-Robot Interaction: A Survey
    Rawal, Niyati
    Stock-Homburg, Ruth Maria
    INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS, 2022, 14 (07) : 1583 - 1604
  • [33] Human-Robot Interaction by Understanding Upper Body Gestures
    Xiao, Yang
    Zhang, Zhijun
    Beck, Aryel
    Yuan, Junsong
    Thalmann, Daniel
    PRESENCE-VIRTUAL AND AUGMENTED REALITY, 2014, 23 (02): : 133 - 154
  • [34] Empathy in Human-Robot Interaction: Designing for Social Robots
    Park, Sung
    Whang, Mincheol
    INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH, 2022, 19 (03)
  • [35] Human-Robot Interaction based on Facial Expression Imitation
    Esfandbod, Alireza
    Rokhi, Zeynab
    Taheri, Alireza
    Alemi, Minoo
    Meghdari, Ali
    2019 7TH INTERNATIONAL CONFERENCE ON ROBOTICS AND MECHATRONICS (ICROM 2019), 2019, : 69 - 73
  • [36] A Multimodal Human-Robot Interaction Manager for Assistive Robots
    Abbasi, Bahareh
    Monaikul, Natawut
    Rysbek, Zhanibek
    Di Eugenio, Barbara
    Zefran, Milos
    2019 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2019, : 6756 - 6762
  • [37] Human-robot interaction in Industry 4.0 based on an Internet of Things real-time gesture control system
    Roda-Sanchez, Luis
    Olivares, Teresa
    Garrido-Hidalgo, Celia
    Luis de la Vara, Jose
    Fernandez-Caballero, Antonio
    INTEGRATED COMPUTER-AIDED ENGINEERING, 2021, 28 (02) : 159 - 175
  • [38] Knowledge acquisition through human-robot multimodal interaction
    Randelli, Gabriele
    Bonanni, Taigo Maria
    Iocchi, Luca
    Nardi, Daniele
    INTELLIGENT SERVICE ROBOTICS, 2013, 6 (01) : 19 - 31
  • [39] Real Time Hand Gesture Recognition for Human Computer Interaction
    Agrawal, Rishabh
    Gupta, Nikita
    2016 IEEE 6TH INTERNATIONAL CONFERENCE ON ADVANCED COMPUTING (IACC), 2016, : 470 - 475
  • [40] Towards Real-time Physical Human-Robot Interaction using Skeleton Information and Hand Gestures
    Mazhar, Osama
    Ramdani, Sofiane
    Navarro, Benjamin
    Passama, Robin
    Cherubini, Andrea
    2018 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2018, : 7336 - 7341