Enhancing Zero-Shot Action Recognition in Videos by Combining GANs with Text and Images

Cited by: 2
Authors
Huang K. [1 ]
Miralles-Pechuán L. [1 ]
Mckeever S. [1 ]
Affiliations
[1] School of Computing, Technological University Dublin, Central Quad, Grangegorman, Dublin
Keywords
Generative adversarial networks; Human action recognition; Semantic knowledge source; Zero-shot learning
DOI
10.1007/s42979-023-01803-3
Abstract
Zero-shot action recognition (ZSAR) tackles the problem of recognising actions that have not been seen by the model during the training phase. Various techniques have been used to achieve ZSAR in the field of human action recognition (HAR) in videos. Techniques based on generative adversarial networks (GANs) are the most promising in terms of performance. GANs are trained to generate representations of unseen videos conditioned on information related to the unseen classes, such as class-label embeddings. In this paper, we present an approach based on combining information from two different GANs, both of which generate a visual representation of unseen classes. Our dual-GAN approach leverages two separate knowledge sources related to the unseen classes: class-label texts and images related to the class label obtained from Google Images. The visual embeddings of the unseen classes generated by the two GANs are merged and used to train a classifier in a supervised-learning fashion for ZSAR classification. Our methodology is based on the idea that using more and richer knowledge sources to generate unseen-class representations leads to higher downstream accuracy when classifying unseen classes. The experimental results show that our dual-GAN approach outperforms state-of-the-art methods on the two benchmark HAR datasets: HMDB51 and UCF101. Additionally, we present a comprehensive discussion and analysis of the experimental results for both datasets to understand the nuances of each approach at a class level. Finally, we examine the impact of the number of visual embeddings generated by the two GANs on the accuracy of the models. © 2023, The Author(s).
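A minimal sketch of the fusion step described in the abstract, written in PyTorch under stated assumptions: two already-trained conditional generators (one conditioned on class-label text, one on image features) synthesise visual embeddings for unseen classes, the two pools are merged, and a softmax classifier is trained on them. The class CondGenerator, the tensors text_cond and image_cond, and all dimensions are illustrative stand-ins, not the paper's actual architecture or data.

```python
# Hypothetical sketch of the dual-GAN fusion step (assumed names and sizes).
import torch
import torch.nn as nn

EMB_DIM, COND_DIM, N_UNSEEN, PER_CLASS = 512, 300, 5, 40  # assumed dimensions

class CondGenerator(nn.Module):
    """Toy conditional generator: class conditioning + noise -> visual embedding."""
    def __init__(self, cond_dim, emb_dim, noise_dim=128):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(
            nn.Linear(cond_dim + noise_dim, 1024), nn.ReLU(),
            nn.Linear(1024, emb_dim))

    def forward(self, cond):
        z = torch.randn(cond.size(0), self.noise_dim)
        return self.net(torch.cat([cond, z], dim=1))

# Stand-ins for the two knowledge sources (in the paper: class-label text
# embeddings and image features gathered from Google Images).
text_cond = torch.randn(N_UNSEEN, COND_DIM)
image_cond = torch.randn(N_UNSEEN, COND_DIM)

gen_text = CondGenerator(COND_DIM, EMB_DIM)   # text-conditioned GAN generator
gen_image = CondGenerator(COND_DIM, EMB_DIM)  # image-conditioned GAN generator

# Generate synthetic visual embeddings per unseen class from both generators
# and merge them into a single labelled training pool.
feats, labels = [], []
for c in range(N_UNSEEN):
    for gen, cond in ((gen_text, text_cond), (gen_image, image_cond)):
        f = gen(cond[c:c + 1].repeat(PER_CLASS, 1)).detach()
        feats.append(f)
        labels.append(torch.full((PER_CLASS,), c, dtype=torch.long))
X, y = torch.cat(feats), torch.cat(labels)

# Train a plain supervised classifier on the merged synthetic embeddings.
clf = nn.Linear(EMB_DIM, N_UNSEEN)
opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(clf(X), y)
    loss.backward()
    opt.step()

# At test time, real visual embeddings of unseen videos would be passed to clf.
```

In this sketch the merge is a simple concatenation of the two generated pools; the abstract also notes that the number of embeddings generated per class (PER_CLASS above) is a factor the paper analyses for its effect on accuracy.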