Hybrid attentive prototypical network for few-shot action recognition

Cited by: 3
Authors
Ruan, Zanxi [1 ]
Wei, Yingmei [1 ]
Guo, Yanming [1 ]
Xie, Yuxiang [1 ]
Affiliations
[1] Natl Univ Def Technol, Lab Big Data & Decis, Changsha, Hunan, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Few-shot action recognition; Few-shot learning; Video understanding; Metric learning;
DOI
10.1007/s40747-024-01571-4
Chinese Library Classification (CLC) number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Most previous few-shot action recognition works tend to process the temporal and spatial features of videos separately, resulting in insufficient extraction of comprehensive features. In this paper, a novel hybrid attentive prototypical network (HAPN) framework for few-shot action recognition is proposed. Distinguished by its joint processing of temporal and spatial information, the HAPN framework strategically manipulates these dimensions from feature extraction to the attention module, consequently enhancing its ability to perform action recognition tasks. Our framework utilizes the R(2+1)D backbone network, jointly extracting temporal and spatial features to ensure a comprehensive understanding of video content. Additionally, our framework introduces the novel Residual Tri-dimensional Attention (ResTriDA) mechanism, specifically designed to augment feature information across the temporal, spatial, and channel dimensions. ResTriDA dynamically enhances crucial aspects of video features by amplifying significant channel-wise features for action distinction, accentuating spatial details vital for capturing the essence of actions within frames, and emphasizing temporal dynamics to capture movement over time. We further propose a prototypical attentive matching module (PAM) built on the concept of metric learning to resolve the overfitting issue common in few-shot tasks. We evaluate our HAPN framework on three classical few-shot action recognition datasets: Kinetics-100, UCF101, and HMDB51. The results indicate that our framework significantly outperforms state-of-the-art methods. Notably, on the 1-shot task, accuracy improves by 9.8% on UCF101, 3.9% on HMDB51, and 12.4% on Kinetics-100. These gains confirm the robustness and effectiveness of our approach in leveraging limited data for precise action recognition.
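For readers unfamiliar with the two building blocks the abstract names, the following is a minimal PyTorch-style sketch, not the authors' code: a hypothetical `ResidualTriAttention` module that re-weights a clip feature map along the channel, temporal, and spatial axes with a residual connection (a stand-in for ResTriDA), and a `prototype_logits` function illustrating the metric-learning idea behind prototype-based matching (a simplification of PAM). All module and function names, layer choices, and shapes here are assumptions made for illustration.

```python
# Illustrative sketch only; the exact ResTriDA and PAM designs are described in the paper.
import torch
import torch.nn as nn


class ResidualTriAttention(nn.Module):
    """Hypothetical tri-dimensional attention: re-weights a video feature map
    of shape (B, C, T, H, W) along channel, temporal, and spatial axes,
    then adds the result back to the input (residual connection)."""

    def __init__(self, channels: int):
        super().__init__()
        self.channel_fc = nn.Linear(channels, channels)
        self.temporal_conv = nn.Conv1d(channels, 1, kernel_size=1)
        self.spatial_conv = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x):                                   # x: (B, C, T, H, W)
        b, c, t, h, w = x.shape
        # channel attention from a globally pooled descriptor
        ch = torch.sigmoid(self.channel_fc(x.mean(dim=(2, 3, 4))))      # (B, C)
        out = x * ch.reshape(b, c, 1, 1, 1)
        # temporal attention from spatially pooled features
        tp = torch.sigmoid(self.temporal_conv(out.mean(dim=(3, 4))))    # (B, 1, T)
        out = out * tp.reshape(b, 1, t, 1, 1)
        # spatial attention from temporally pooled features
        sp = torch.sigmoid(self.spatial_conv(out.mean(dim=2)))          # (B, 1, H, W)
        out = out * sp.reshape(b, 1, 1, h, w)
        return x + out                                       # residual


def prototype_logits(support, support_labels, query, n_way):
    """Prototype-based matching: average support embeddings per class into
    prototypes, then score each query by negative Euclidean distance."""
    prototypes = torch.stack(
        [support[support_labels == k].mean(dim=0) for k in range(n_way)])
    return -torch.cdist(query, prototypes)                   # (n_query, n_way)


# Toy 5-way 1-shot episode with made-up R(2+1)D-style clip features
feats = torch.randn(2, 512, 8, 7, 7)                         # 2 query clips
query_emb = ResidualTriAttention(512)(feats).mean(dim=(2, 3, 4))   # (2, 512)
support_emb = torch.randn(5, 512)                            # one clip per class
support_labels = torch.arange(5)
logits = prototype_logits(support_emb, support_labels, query_emb, n_way=5)
print(logits.shape)                                          # torch.Size([2, 5])
```

In this simplified sketch the query is classified by its distance to class prototypes rather than by a learned classifier head, which is the metric-learning property the abstract credits with reducing overfitting when only a few labelled clips per class are available.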
Pages: 8249-8272
Number of pages: 24