Human action recognition with transformer based on convolutional features

Cited by: 4
Authors
Shi, Chengcheng [1 ]
Liu, Shuxin [1 ]
Affiliations
[1] Shanghai Dianji Univ, Sch Elect Engn, 300 Shuihua Rd, Pudong New Area, Shanghai 201306, Peoples R China
Source
INTELLIGENT DECISION TECHNOLOGIES-NETHERLANDS | 2024, Vol. 18, No. 2
Keywords
Human action recognition; convolutional features; pose estimation; transformer; NETWORK
DOI
10.3233/IDT-240159
CLC number
TP18 [Artificial intelligence theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Human action recognition is a key research direction in computer vision with broad practical value, and it is widely applied in video surveillance, human-computer interaction, sports analysis, and healthcare. However, the diversity and complexity of human actions pose many challenges, such as handling complex actions, distinguishing similar actions, coping with viewpoint changes, and overcoming occlusion. To address these challenges, this paper proposes an innovative framework for human action recognition that combines a recent pose estimation algorithm, a pre-trained CNN model, and a Vision Transformer into an efficient system. First, the pose estimation algorithm accurately extracts human pose information from real RGB image frames. A pre-trained CNN model then performs feature extraction on the extracted pose information. Finally, a Vision Transformer fuses and classifies the extracted features. Experiments on two benchmark datasets, UCF50 and UCF101, demonstrate the effectiveness and efficiency of the proposed framework. Quantitative and qualitative experiments further explore the framework's applicability and limitations in different scenarios, providing valuable insights and inspiration for future research.
Pages: 881-896
Page count: 16
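
The abstract describes a three-stage pipeline: pose estimation on RGB frames, per-frame feature extraction with a pre-trained CNN, and fusion plus classification with a Vision Transformer. The sketch below illustrates that kind of pipeline in PyTorch; the layer sizes, the ConvFeatureTransformer class name, and the assumption that an upstream pose estimator supplies pose-rendered frames are illustrative choices, not the authors' implementation.

```python
# Minimal sketch of a pose -> CNN features -> transformer classification pipeline.
# All module sizes and the class name are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn


class ConvFeatureTransformer(nn.Module):
    def __init__(self, feat_dim=512, num_heads=8, num_layers=4, num_classes=101):
        super().__init__()
        # Stand-in for the pre-trained CNN feature extractor applied to each frame.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Transformer encoder fuses the per-frame features across the clip.
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=num_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, feat_dim))
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, clips):
        # clips: (batch, frames, 3, H, W) pose-rendered frames from an upstream pose estimator.
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).flatten(1).view(b, t, -1)
        tokens = torch.cat([self.cls_token.expand(b, -1, -1), feats], dim=1)
        fused = self.encoder(tokens)      # temporal fusion over the frame tokens
        return self.head(fused[:, 0])     # classify from the [CLS] token


# Usage: two clips of 16 frames at 112x112 -> logits over 101 action classes (UCF101).
logits = ConvFeatureTransformer()(torch.randn(2, 16, 3, 112, 112))
print(logits.shape)  # torch.Size([2, 101])
```

A learned classification token aggregates the frame features before the final linear head in this sketch; the paper's actual fusion and classification strategy may differ.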