Latent Embedding Feedback and Discriminative Features for Zero-Shot Classification

Cited by: 180
Authors
Narayan, Sanath [1 ]
Gupta, Akshita [1 ]
Khan, Fahad Shahbaz [1 ,3 ]
Snoek, Cees G. M. [2 ]
Shao, Ling [1 ,3 ]
Affiliations
[1] Inception Institute of Artificial Intelligence, Abu Dhabi, United Arab Emirates
[2] University of Amsterdam, Amsterdam, Netherlands
[3] Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi, United Arab Emirates
Source
COMPUTER VISION - ECCV 2020, PT XXII | 2020 / Vol. 12367
Keywords
Generalized zero-shot classification; Feature synthesis;
DOI
10.1007/978-3-030-58542-6_29
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Zero-shot learning strives to classify unseen categories for which no data is available during training. In the generalized variant, the test samples can further belong to seen or unseen categories. The state-of-the-art relies on Generative Adversarial Networks that synthesize unseen class features by leveraging class-specific semantic embeddings. During training, they generate semantically consistent features, but discard this constraint during feature synthesis and classification. We propose to enforce semantic consistency at all stages of (generalized) zero-shot learning: training, feature synthesis and classification. We first introduce a feedback loop, from a semantic embedding decoder, that iteratively refines the generated features during both the training and feature synthesis stages. The synthesized features together with their corresponding latent embeddings from the decoder are then transformed into discriminative features and utilized during classification to reduce ambiguities among categories. Experiments on (generalized) zero-shot object and action classification reveal the benefit of semantic consistency and iterative feedback, outperforming existing methods on six zero-shot learning benchmarks. Source code at https://github.com/akshitac8/tfvaegan.
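The feedback loop and discriminative-feature construction described in the abstract can be illustrated with a minimal sketch, assuming a PyTorch-style setup. The module names (Generator, SemanticDecoder, Feedback), the layer sizes, the single refinement step, and the use of concatenation to form the final discriminative features are illustrative assumptions made here for clarity; the authors' full VAE-GAN-based implementation, including its loss terms and training schedule, is in the linked tfvaegan repository.

# Minimal sketch of the described feedback-loop synthesis (assumptions noted above).
import torch
import torch.nn as nn

FEAT_DIM, ATTR_DIM, NOISE_DIM, HID_DIM = 2048, 312, 312, 4096  # illustrative sizes

class Generator(nn.Module):
    """Synthesizes a class feature from noise plus a class semantic embedding."""
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(NOISE_DIM + ATTR_DIM, HID_DIM)
        self.fc2 = nn.Linear(HID_DIM, FEAT_DIM)
        self.act = nn.LeakyReLU(0.2)

    def forward(self, z, attr, feedback=None):
        h = self.act(self.fc1(torch.cat([z, attr], dim=1)))
        if feedback is not None:      # refine the generator latent with decoder feedback
            h = h + feedback
        return torch.relu(self.fc2(h))

class SemanticDecoder(nn.Module):
    """Maps a (real or synthesized) feature back to its semantic embedding."""
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(FEAT_DIM, HID_DIM)
        self.fc2 = nn.Linear(HID_DIM, ATTR_DIM)
        self.act = nn.LeakyReLU(0.2)

    def forward(self, x):
        latent = self.act(self.fc1(x))  # latent embedding reused at classification time
        return self.fc2(latent), latent

class Feedback(nn.Module):
    """Transforms the decoder latent into a correction signal for the generator."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(HID_DIM, HID_DIM)

    def forward(self, latent):
        return self.fc(latent)

def synthesize(gen, dec, fb, attr, n_steps=1):
    """Iteratively refined feature synthesis for one batch of class embeddings."""
    z = torch.randn(attr.size(0), NOISE_DIM)
    x = gen(z, attr)                  # initial synthesis
    for _ in range(n_steps):          # feedback refinement loop
        _, latent = dec(x)
        x = gen(z, attr, feedback=fb(latent))
    _, latent = dec(x)
    # Assumption: discriminative features formed by concatenating the synthesized
    # feature with the decoder's latent embedding before training the classifier.
    return torch.cat([x, latent], dim=1)

if __name__ == "__main__":
    gen, dec, fb = Generator(), SemanticDecoder(), Feedback()
    attrs = torch.randn(8, ATTR_DIM)  # dummy class embeddings for 8 samples
    feats = synthesize(gen, dec, fb, attrs)
    print(feats.shape)                # torch.Size([8, FEAT_DIM + HID_DIM])

In this sketch the same refinement loop is applied during both training and test-time feature synthesis, which is the semantic-consistency point the abstract emphasizes; the discriminator, VAE terms, and the exact feature transformation are omitted and should be taken from the repository rather than from this illustration.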
Pages: 479-495
Number of pages: 17