Human activity recognition for efficient human-robot collaboration

Cited by: 7
Authors
Zhdanova, M. [1 ]
Voronin, V. [1 ]
Semenishchev, E. [1 ]
Ilyukhin, Yu [1 ]
Zelensky, A. [1 ]
Affiliations
[1] Moscow State Univ Technol STANKIN, Ctr Cognit Technol & Machine Vis, Vadkovsky Line 1, Moscow 127055, Russia
Keywords
action recognition; human activity; descriptor; machine vision systems; human-robot collaboration;
DOI
10.1117/12.2574133
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
A crucial technology in modern smart manufacturing is the concept of human-robot collaboration (HRC). In HRC, operators and robots work together to perform complex tasks across varied scenarios and under heterogeneous, dynamic conditions. A unique role in implementing the HRC model, as a means of sensing, is assigned to machine vision systems: they acquire and process visual information about the environment, analyze images of the working area, transfer this information to the control system, and support decision-making within the framework of the task. The task of recognizing the actions of a human operator therefore becomes relevant for developing a robot control system that realizes an effective HRC system. The commands an operator gives a robot can take a variety of forms, from simple and concrete to quite abstract. This introduces several difficulties when implementing automated recognition systems in real conditions: a heterogeneous background, an uncontrolled work environment, irregular lighting, etc. In this article, we present an algorithm for constructing a video descriptor and solve the problem of classifying a set of actions into predefined classes. The proposed algorithm is based on capturing three-dimensional sub-volumes located inside a patch of the video sequence and computing the intensity differences between these sub-volumes. The video patches and the central coordinates of the sub-volumes are constructed on the principle of VLBP. Representing three-dimensional blocks (patches) of a video sequence by capturing sub-volumes inside each patch, at several scales and orientations, yields an informative description of the scene and of the actions taking place in it. Experimental results demonstrate the effectiveness of the proposed algorithm on well-known datasets.
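The sub-volume comparison described in the abstract can be illustrated with a minimal VLBP-style sketch, not the authors' exact algorithm: a 3D video volume is scanned, the mean intensity of small sub-volumes around each sampled centre is thresholded against the central sub-volume to form a binary code, and the codes are histogrammed into a descriptor. The offset pattern, sub-volume radius `r`, and sampling `step` below are illustrative assumptions, and the multi-scale, multi-orientation construction from the paper is omitted.

```python
import numpy as np

def subvolume_mean(video, t, y, x, r):
    """Mean intensity of the (2r+1)^3 sub-volume centred at voxel (t, y, x)."""
    return video[t - r:t + r + 1, y - r:y + r + 1, x - r:x + r + 1].mean()

def vlbp_like_descriptor(video, r=1, step=2):
    """Histogram of binary codes built from sub-volume intensity differences.

    video: ndarray of shape (T, H, W), grayscale frames.
    For each sampled centre, eight neighbouring sub-volumes (four spatial
    offsets in the previous and next frames) are compared against the central
    sub-volume; each comparison contributes one bit of an 8-bit code.
    """
    T, H, W = video.shape
    offsets = [(dt, dy, dx)
               for dt in (-r, r)
               for dy, dx in ((-r, 0), (r, 0), (0, -r), (0, r))]
    hist = np.zeros(2 ** len(offsets), dtype=np.int64)  # 256 bins for 8 bits
    for t in range(2 * r, T - 2 * r, step):
        for y in range(2 * r, H - 2 * r, step):
            for x in range(2 * r, W - 2 * r, step):
                centre = subvolume_mean(video, t, y, x, r)
                code = 0
                for bit, (dt, dy, dx) in enumerate(offsets):
                    # Threshold the neighbour's mean against the centre's mean
                    if subvolume_mean(video, t + dt, y + dy, x + dx, r) >= centre:
                        code |= 1 << bit
                hist[code] += 1
    total = hist.sum()
    return hist / total if total else hist.astype(float)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clip = rng.random((12, 32, 32))       # synthetic 12-frame grayscale clip
    desc = vlbp_like_descriptor(clip)
    print(desc.shape)                     # (256,)
```

The resulting normalized histogram can then be fed to any standard classifier (e.g. an SVM) to assign the clip to one of the predefined action classes.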
Pages: 11
Related Papers (50 total)
  • [1] An Efficient Human Activity Recognition Framework Based on Graph Convolutional Network for Human-Robot Collaboration
    Liu, Wenzhe
    Liu, Zhaowei
    Su, Hang
    2024 WRC SYMPOSIUM ON ADVANCED ROBOTICS AND AUTOMATION, WRC SARA, 2024, : 243 - 248
  • [2] A Model-Based Human Activity Recognition for Human-Robot Collaboration
    Lee, Sang Uk
    Hofmann, Andreas
    Williams, Brian
    2019 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2019, : 736 - 743
  • [3] Task-Based Control and Human Activity Recognition for Human-Robot Collaboration
    Uzunovic, Tarik
    Golubovic, Edin
    Tucakovi, Zlatan
    Acikmese, Yasin
    Sabanovic, Asif
    IECON 2018 - 44TH ANNUAL CONFERENCE OF THE IEEE INDUSTRIAL ELECTRONICS SOCIETY, 2018, : 5110 - 5115
  • [4] Gesture recognition for human-robot collaboration: A review
    Liu, Hongyi
    Wang, Lihui
    INTERNATIONAL JOURNAL OF INDUSTRIAL ERGONOMICS, 2018, 68 : 355 - 367
  • [5] Anticipatory Robot Control for Efficient Human-Robot Collaboration
    Huang, Chien-Ming
    Mutlu, Bilge
    ELEVENTH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN ROBOT INTERACTION (HRI'16), 2016, : 83 - 90
  • [6] Explainable AI-Enhanced Human Activity Recognition for Human-Robot Collaboration in Agriculture
    Benos, Lefteris
    Tsaopoulos, Dimitrios
    Tagarakis, Aristotelis C.
    Kateris, Dimitrios
    Busato, Patrizia
    Bochtis, Dionysis
    APPLIED SCIENCES-BASEL, 2025, 15 (02):
  • [7] Multi-Camera-Based Human Activity Recognition for Human-Robot Collaboration in Construction
    Jang, Youjin
    Jeong, Inbae
    Heravi, Moein Younesi
    Sarkar, Sajib
    Shin, Hyunkyu
    Ahn, Yonghan
    SENSORS, 2023, 23 (15)
  • [8] Efficient behavior learning in human-robot collaboration
    Munzer, Thibaut
    Toussaint, Marc
    Lopes, Manuel
    AUTONOMOUS ROBOTS, 2018, 42 (05) : 1103 - 1115
  • [9] Towards Efficient Human-Robot Collaboration With Robust Plan Recognition and Trajectory Prediction
    Cheng, Yujiao
    Sun, Liting
    Liu, Changliu
    Tomizuka, Masayoshi
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2020, 5 (02): : 2602 - 2609
  • [10] Weakly-Supervised Learning for Multimodal Human Activity Recognition in Human-Robot Collaboration Scenarios
    Pohlt, Clemens
    Schlegl, Thomas
    Wachsmuth, Sven
    2020 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2020, : 8381 - 8386