Transductive Zero-Shot Action Recognition by Word-Vector Embedding

Cited by: 105
Authors
Xu, Xun [1 ]
Hospedales, Timothy [1 ]
Gong, Shaogang [1 ]
Affiliation
[1] Queen Mary Univ London, London, England
Keywords
Zero-shot action recognition; Zero-shot learning; Semantic embedding; Semi-supervised learning; Transfer learning; Action recognition;
DOI
10.1007/s11263-016-0983-5
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
The number of action categories is growing rapidly, and it has become increasingly hard to label sufficient training data for learning conventional models for all of them. Instead of collecting and exhaustively labelling ever more data, an attractive alternative is "zero-shot learning" (ZSL). To that end, in this study we construct a mapping between visual features and a semantic descriptor of each action category, allowing new categories to be recognised in the absence of any visual training data. Existing ZSL studies focus primarily on still images and attribute-based semantic representations. In this work, we explore word-vectors as the shared semantic space in which to embed videos and category labels for ZSL action recognition. This is a more challenging problem than existing ZSL of still images and/or attributes, because the mapping between video space-time features of actions and the semantic space is more complex and harder to learn in a way that generalises over the cross-category domain shift. To solve this generalisation problem, we investigate a series of synergistic strategies that improve upon the standard ZSL pipeline. Most of these strategies are transductive in nature, meaning they access the testing data during the training phase. First, we significantly enhance the semantic-space mapping by proposing manifold-regularized regression and data augmentation strategies. Second, we evaluate two existing post-processing strategies (transductive self-training and hubness correction) and show that they are complementary. We evaluate our model extensively on a wide range of human action datasets, including HMDB51, UCF101 and Olympic Sports, and event datasets, including CCV and TRECVID MED 13. The results demonstrate that our approach achieves state-of-the-art zero-shot action recognition performance with a simple and efficient pipeline and without supervised annotation of attributes.
Finally, we present an in-depth analysis of why and when zero-shot learning works, including demonstrating the ability to predict cross-category transferability in advance.
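The standard ZSL pipeline the abstract builds on can be sketched in a few lines: a regression maps visual features into the semantic word-vector space on seen classes, and an unseen test video is then matched to the nearest unseen-class word vector. The sketch below uses plain ridge regression and random toy features as stand-ins; the paper's actual system uses video space-time features, word2vec label embeddings, and manifold-regularized regression, none of which are reproduced here.

```python
import numpy as np

# Toy stand-in dimensions; a real system would use video space-time
# features (e.g. dense trajectories) and ~300-d word2vec label vectors.
rng = np.random.default_rng(0)
d_vis, d_sem = 50, 10

# Seen-class training set: visual features X, and as regression target
# the word vector of each training video's class label.
n_train = 200
X_train = rng.normal(size=(n_train, d_vis))
W_true = rng.normal(size=(d_vis, d_sem))        # hidden "true" mapping
Y_train = X_train @ W_true + 0.1 * rng.normal(size=(n_train, d_sem))

# Step 1: ridge regression from visual space to the semantic space,
# fitted on seen classes only (stand-in for manifold-regularized
# regression in the paper).
lam = 1.0
A = np.linalg.solve(X_train.T @ X_train + lam * np.eye(d_vis),
                    X_train.T @ Y_train)        # shape (d_vis, d_sem)

# Step 2: unseen-class prototypes = word vectors of the novel category
# names (random here; word2vec vectors in practice).
prototypes = rng.normal(size=(3, d_sem))        # 3 unseen classes

def classify(x):
    """Embed a test video into semantic space and match it to the
    nearest unseen-class prototype by cosine similarity."""
    z = x @ A
    sims = (prototypes @ z) / (
        np.linalg.norm(prototypes, axis=1) * np.linalg.norm(z) + 1e-12)
    return int(np.argmax(sims))
```

The transductive post-processing steps the abstract evaluates operate on top of this: self-training re-estimates the unseen prototypes from the pooled test-set embeddings, and hubness correction rescales the similarity scores so that a few prototypes do not dominate as nearest neighbours.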
Pages: 309-333
Page count: 25