Transfer-Learning-Based Gesture and Pose Recognition System for Human-Robot Interaction: An Internet of Things Application

Cited by: 0
Authors
Kuo, Ping-Huan [1 ,2 ]
Shen, Yu-Chi [1 ]
Feng, Po-Hsun [1 ]
Chiu, Yu-Jhih [1 ]
Yau, Her-Terng [1 ,2 ]
Affiliations
[1] Natl Chung Cheng Univ, Dept Mech Engn, Chiayi 62102, Taiwan
[2] Natl Chung Cheng Univ, Adv Inst Mfg High Tech Innovat, Chiayi 62102, Taiwan
Source
IEEE INTERNET OF THINGS JOURNAL | 2024, Vol. 11, No. 21
Keywords
Hand tracking; image recognition; posture estimation; transfer learning;
DOI
10.1109/JIOT.2024.3436584
CLC number
TP [Automation Technology; Computer Technology]
Discipline code
0812
Abstract
Human-machine interactions have become increasingly crucial in the current era of the Internet of Things (IoT). Mutual feedback is critical for adjusting machine operations to improve the efficiency of human-machine interactions. Imaging can be easily conducted in various contexts to acquire large volumes of visual information, such as that regarding human gestures. In the present study, machine learning technology, which is the driving technology for intelligent processing in IoT systems, was adopted to develop a system for identifying and classifying six hand gestures and five body poses. Gesture and pose data were collected and analyzed using multiple algorithms to construct classification models for pose recognition. Data for one individual were used to train base gesture and pose recognition models, and transfer learning was then performed to adapt these base models to the gesture and pose data of other individuals. The adapted models achieved satisfactory recognition accuracy. The developed gesture and pose recognition models were tested by employing them to control a robotic arm and an automated guided vehicle, respectively. All models achieved accuracy rates of >97%, thereby confirming the effectiveness of the proposed machine-learning-based method for gesture and pose recognition.
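The adaptation step the abstract describes, pretraining a recognition model on one individual's data and then transferring it to new individuals, can be sketched as follows. This is an illustrative example, not the paper's actual code: a tiny two-layer classifier is "pretrained" on synthetic pose features for one subject, then adapted to a second subject by freezing the shared feature layer and retraining only the output layer on a small number of new samples. The 30-dimensional feature vectors, layer sizes, and learning rates are arbitrary assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def train(X, y, W1, W2, epochs, lr, freeze_W1=False):
    """Gradient-descent training; optionally freeze the feature layer W1."""
    Y = np.eye(W2.shape[1])[y]                 # one-hot labels
    for _ in range(epochs):
        H = np.tanh(X @ W1)                    # shared feature layer
        P = softmax(H @ W2)                    # class probabilities
        G = (P - Y) / len(X)                   # output-layer error signal
        W2 -= lr * H.T @ G
        if not freeze_W1:                      # skipped during transfer
            W1 -= lr * X.T @ ((G @ W2.T) * (1 - H**2))
    return W1, W2

# "Subject A": synthetic 30-D landmark features, 5 pose classes.
Xa = rng.normal(size=(200, 30))
ya = rng.integers(0, 5, 200)
W1 = rng.normal(scale=0.1, size=(30, 16))
W2 = rng.normal(scale=0.1, size=(16, 5))
W1, W2 = train(Xa, ya, W1, W2, epochs=50, lr=0.5)

# Transfer to "subject B" with few samples: freeze W1, retrain W2 only.
Xb = rng.normal(size=(20, 30))
yb = rng.integers(0, 5, 20)
W1_frozen = W1.copy()
W1, W2 = train(Xb, yb, W1, W2, epochs=20, lr=0.5, freeze_W1=True)
assert np.allclose(W1, W1_frozen)              # feature layer untouched
```

The design choice mirrors the abstract's workflow: the base model's learned feature representation is reused, and only the subject-specific decision layer is re-fit, so adaptation needs far less data than training from scratch.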
Pages: 35376-35389 (14 pages)