On the Importance of Spatial Relations for Few-shot Action Recognition

Cited by: 3
Authors
Zhang, Yilun [1 ]
Fu, Yuqian [1 ]
Ma, Xingjun [1 ]
Qi, Lizhe [2 ]
Chen, Jingjing [1 ]
Wu, Zuxuan [1 ]
Jiang, Yu-Gang [1 ]
Affiliations
[1] Fudan Univ, Sch CS, Shanghai Key Lab Intel Info Proc, Shanghai, Peoples R China
[2] Fudan Univ, Acad Engn & Technol, Shanghai, Peoples R China
Source
PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023 | 2023
Keywords
Action recognition; Few-shot learning; Meta learning; Cross attention; Spatial relation alignment;
DOI
10.1145/3581783.3612192
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep learning has achieved great success in video recognition, yet it still struggles to recognize novel actions when given only a few examples. To tackle this challenge, few-shot action recognition methods have been proposed to transfer knowledge from a source dataset to a novel target dataset with only one or a few labeled videos. However, existing methods mainly focus on modeling the temporal relations between the query and support videos while ignoring the spatial relations. In this paper, we find that spatial misalignment between objects also occurs in videos, and is notably more common than temporal inconsistency. We are thus motivated to investigate the importance of spatial relations and propose a more accurate few-shot action recognition method that leverages both spatial and temporal information. In particular, we contribute a novel Spatial Alignment Cross Transformer (SA-CT), which learns to re-adjust spatial relations and incorporates temporal information. Experiments reveal that, even without using any temporal information, the performance of SA-CT is comparable to that of temporal-based methods on 3/4 benchmarks. To further incorporate temporal information, we propose a simple yet effective Temporal Mixer module, which enhances the video representation and improves the performance of the full SA-CT model, achieving very competitive results. In this work, we also exploit large-scale pretrained models for few-shot action recognition, providing useful insights for this research direction.
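The abstract describes re-adjusting spatial relations between query and support videos via cross attention. The SA-CT architecture itself is not reproduced here; the following is only a minimal NumPy sketch of the underlying cross-attention operation, where each query-video patch attends over support-video patches. All names, shapes, and the absence of learned projections are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query_feats, support_feats, d_k):
    """Each query patch attends over all support patches.

    query_feats:   (Nq, d) patch features of the query video
    support_feats: (Ns, d) patch features of a support video
    Returns a (Nq, d) spatially re-aligned query representation.
    """
    scores = query_feats @ support_feats.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)  # (Nq, Ns) attention map
    return weights @ support_feats

rng = np.random.default_rng(0)
q = rng.standard_normal((4, 8))  # 4 query patches, 8-dim features
s = rng.standard_normal((6, 8))  # 6 support patches
out = cross_attention(q, s, d_k=8)
print(out.shape)  # (4, 8)
```

Because the attention weights are computed between spatial patches of the two videos rather than between frames, matching objects can be aligned even when they appear at different image locations, which is the spatial-misalignment problem the abstract highlights.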
Pages: 2243-2251
Page count: 9