Self-supervised Representation Learning for Fine Grained Human Hand Action Recognition in Industrial Assembly Lines

Cited by: 2
Authors
Sturm, Fabian [1,2]
Sathiyababu, Rahul [1 ]
Allipilli, Harshitha [1 ]
Hergenroether, Elke [2 ]
Siegel, Melanie [2 ]
Affiliations
[1] Bosch Rexroth AG, Lise Meitner Str 4, D-89081 Ulm, Germany
[2] Univ Appl Sci Darmstadt, Schoefferstr 3, D-64295 Darmstadt, Germany
Source
ADVANCES IN VISUAL COMPUTING, ISVC 2023, PT I | 2023, Vol. 14361
Keywords
Self-Supervised Learning; Human Action Recognition; Industrial Vision
DOI
10.1007/978-3-031-47969-4_14
Chinese Library Classification
TP301 [Theory, Methods]
Discipline Code
081202
Abstract
Humans are still indispensable on industrial assembly lines, but in the event of an error they need support from intelligent systems. Besides the objects to be observed, it is equally important to understand a human's fine-grained hand movements in order to track the entire process. However, deep learning based hand action recognition methods are very label-intensive, and not every industrial company can afford the associated annotation costs. This work therefore presents a self-supervised learning approach for industrial assembly processes that allows a spatio-temporal transformer architecture to be pre-trained on a variety of information from real-world video footage of daily life. Subsequently, this deep learning model is adapted to the industrial assembly task at hand using only a few labels. It is shown which known real-world datasets are best suited for representation learning of these hand actions via a regression pretext task, and to what extent they improve the subsequent supervised classification task.
Pages: 172-184
Page count: 13
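The abstract describes a two-stage scheme: a spatio-temporal transformer is first pre-trained self-supervised on unlabeled everyday video through a regression pretext task, and then fine-tuned for assembly action classification with only a few labels. The PyTorch sketch below illustrates that workflow under assumptions of our own; the backbone layout, pretext targets, feature dimensions, data loaders, and class count are hypothetical placeholders, not the paper's implementation.

```python
# Minimal sketch of the two-stage training scheme summarized in the abstract.
# All shapes, targets, and names are hypothetical placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class SpatioTemporalTransformer(nn.Module):
    """Toy stand-in for a spatio-temporal transformer backbone."""
    def __init__(self, in_dim=128, feat_dim=256, depth=4, heads=8):
        super().__init__()
        self.embed = nn.Linear(in_dim, feat_dim)  # clip tokens -> embeddings
        layer = nn.TransformerEncoderLayer(feat_dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)

    def forward(self, x):  # x: (batch, tokens, in_dim)
        return self.encoder(self.embed(x)).mean(dim=1)  # pooled clip embedding

backbone = SpatioTemporalTransformer()
regression_head = nn.Linear(256, 63)  # e.g. 21 hand keypoints x (x, y, z)
classifier = nn.Linear(256, 10)       # e.g. 10 fine-grained assembly actions

# Dummy tensors stand in for tokenized video clips; in practice the pretext
# targets would be derived from the unlabeled footage itself.
unlabeled = DataLoader(TensorDataset(torch.randn(64, 16, 128),
                                     torch.randn(64, 63)), batch_size=16)
labeled = DataLoader(TensorDataset(torch.randn(8, 16, 128),
                                   torch.randint(0, 10, (8,))), batch_size=4)

# Stage 1: self-supervised pre-training via a regression pretext task.
opt = torch.optim.AdamW(
    list(backbone.parameters()) + list(regression_head.parameters()), lr=1e-4)
for clips, pretext_targets in unlabeled:
    loss = nn.functional.mse_loss(regression_head(backbone(clips)),
                                  pretext_targets)
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: supervised fine-tuning of a classifier with only a few labels.
opt = torch.optim.AdamW(
    list(backbone.parameters()) + list(classifier.parameters()), lr=1e-5)
for clips, labels in labeled:
    loss = nn.functional.cross_entropy(classifier(backbone(clips)), labels)
    opt.zero_grad(); loss.backward(); opt.step()
```

Keeping the fine-tuning learning rate below the pre-training rate is a common way to preserve the pre-trained representations while the small labeled set adapts the classifier.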