Learning SpatioTemporal and Motion Features in a Unified 2D Network for Action Recognition

Cited by: 22
Authors
Wang, Mengmeng [1]
Xing, Jiazheng [1]
Su, Jing [2]
Chen, Jun [1]
Liu, Yong [1]
Affiliations
[1] Zhejiang Univ, Coll Control Sci & Engn, Lab Adv Percept Robot & Intelligent Learning, Hangzhou 310027, Zhejiang, Peoples R China
[2] Fudan Univ, Dept Opt Sci & Engn, Shanghai 200433, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Action recognition; frequency illustration; motion features; spatiotemporal features; twins training framework; REPRESENTATION;
DOI
10.1109/TPAMI.2022.3173658
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Recent methods for action recognition typically apply 3D Convolutional Neural Networks (CNNs) to extract spatiotemporal features and introduce optical flow to represent motion features. Although they achieve state-of-the-art performance, these methods are expensive in both time and space. In this paper, we propose to represent both kinds of features in a unified 2D CNN, without any 3D convolution or optical flow calculation. In particular, we first design a channel-wise spatiotemporal module to encode spatiotemporal features and a channel-wise motion module to encode feature-level motion features efficiently. We also provide a distinctive illustration of the two modules from the frequency domain, interpreting them as advanced, learnable versions of frequency components. Second, we combine these two modules and an identity mapping path into one unified block that can easily replace the original residual block in the ResNet architecture, forming a simple yet effective network, dubbed the STM network, that introduces very limited extra computational cost and parameters. Third, we propose a novel Twins Training framework for action recognition, incorporating a correlation loss to optimize the inter-class and intra-class correlation and a siamese structure to make full use of the training data. We extensively validate the proposed STM on both temporal-related datasets (i.e., Something-Something v1 & v2) and scene-related datasets (i.e., Kinetics-400, UCF-101, and HMDB-51), and it achieves favorable results against state-of-the-art methods on all of these datasets.
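The abstract's two building blocks lend themselves to a brief illustration. Below is a minimal PyTorch-style sketch (not the authors' released implementation) of what channel-wise spatiotemporal and motion encoding could look like; the class names, the (N, T, C, H, W) tensor layout, and the kernel sizes are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class ChannelwiseSpatioTemporalModule(nn.Module):
    """Sketch: a depthwise (channel-wise) 1D convolution along the
    temporal axis mixes information across frames, per channel."""
    def __init__(self, channels: int):
        super().__init__()
        self.temporal = nn.Conv1d(channels, channels, kernel_size=3,
                                  padding=1, groups=channels, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, T, C, H, W) -> fold spatial dims into the batch,
        # convolve over T, then restore the original layout.
        n, t, c, h, w = x.shape
        y = x.permute(0, 3, 4, 2, 1).reshape(n * h * w, c, t)
        y = self.temporal(y)
        return y.reshape(n, h, w, c, t).permute(0, 4, 3, 1, 2)

class ChannelwiseMotionModule(nn.Module):
    """Sketch: feature-level motion as the difference between a
    depthwise-convolved next frame and the current frame, so no
    optical flow is ever computed."""
    def __init__(self, channels: int):
        super().__init__()
        self.dw = nn.Conv2d(channels, channels, kernel_size=3,
                            padding=1, groups=channels, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, t, c, h, w = x.shape
        diffs = [self.dw(x[:, i + 1]) - x[:, i] for i in range(t - 1)]
        diffs.append(torch.zeros_like(diffs[0]))  # pad the last step
        return torch.stack(diffs, dim=1)          # (N, T, C, H, W)

if __name__ == "__main__":
    clip = torch.randn(2, 8, 64, 14, 14)  # N=2 clips, T=8 frames
    st = ChannelwiseSpatioTemporalModule(64)(clip)
    mo = ChannelwiseMotionModule(64)(clip)
    print(st.shape, mo.shape)  # both keep (2, 8, 64, 14, 14)
```

In an STM-style block, outputs like these would be summed with an identity mapping path in place of a standard residual branch; note that both sketches use only 1D/2D convolutions, consistent with the abstract's claim of avoiding 3D convolutions and optical flow.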
Pages: 3347-3362
Page count: 16