Rethinking Generative Zero-Shot Learning: An Ensemble Learning Perspective for Recognising Visual Patches

Cited by: 23
Authors
Chen, Zhi [1]
Wang, Sen [1]
Li, Jingjing [2]
Huang, Zi [1]
Affiliations
[1] Univ Queensland, Brisbane, Qld, Australia
[2] Univ Elect Sci & Technol China, Chengdu, Sichuan, Peoples R China
Source
MM '20: PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA | 2020
Keywords
generative zero-shot learning; fine-grained classification
DOI
10.1145/3394171.3413813
CLC classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Zero-shot learning (ZSL) is commonly used to address the pervasive problem of predicting unseen classes in fine-grained image classification and other tasks. One family of solutions learns from synthesised visual samples of unseen classes, produced by generative models conditioned on auxiliary semantic information such as natural language descriptions. However, for most of these models, performance suffers from noise in the form of irrelevant image backgrounds. Further, most methods do not assign a calculated weight to each semantic patch, even though, in practice, the discriminative power of features can be quantified and directly leveraged to improve accuracy and reduce computational complexity. To address these issues, we propose a novel framework called multi-patch generative adversarial nets (MPGAN) that synthesises local patch features and labels unseen classes with a novel weighted voting strategy. The process begins by generating discriminative visual features from noisy text descriptions for a set of predefined local patches, using multiple specialist generative models. The features synthesised from each patch for unseen classes are then used to construct an ensemble of diverse supervised classifiers, one per local patch. A voting strategy averages the probability distributions output by the classifiers and, because some patches are more discriminative than others, a discrimination-based attention mechanism weights each patch accordingly. Extensive experiments show that MPGAN achieves significantly higher accuracy than state-of-the-art methods.
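As a rough illustration of the weighted voting step described in the abstract, the Python sketch below fuses per-patch class distributions using discrimination-based attention weights. All names, shapes, and values are assumptions made for illustration only; this is not the authors' released implementation.

import numpy as np

def weighted_vote(patch_probs, patch_weights):
    """Combine per-patch class distributions into one prediction.

    patch_probs   : array of shape (P, C); softmax outputs of the P
                    patch-specific classifiers for one test image.
    patch_weights : array of shape (P,); discrimination-based attention
                    weights, assumed non-negative and summing to 1.
    Returns the index of the predicted unseen class.
    """
    patch_probs = np.asarray(patch_probs)
    patch_weights = np.asarray(patch_weights)
    # Weighted average of the P distributions, then pick the top class.
    fused = (patch_weights[:, None] * patch_probs).sum(axis=0)  # shape (C,)
    return int(np.argmax(fused))

# Toy usage: 3 patches, 4 unseen classes. The most discriminative patch
# (weight 0.6) dominates the vote.
probs = [
    [0.10, 0.70, 0.10, 0.10],   # patch 1 (e.g. head)
    [0.25, 0.25, 0.25, 0.25],   # patch 2 (e.g. background-heavy patch)
    [0.15, 0.60, 0.15, 0.10],   # patch 3 (e.g. wing)
]
weights = [0.6, 0.1, 0.3]
print(weighted_vote(probs, weights))   # -> 1

The key design choice reflected here is that an uninformative patch (uniform distribution) contributes little when its attention weight is small, which is how the paper motivates weighting patches by their discriminative power rather than averaging them uniformly.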
Pages: 3413-3421
Page count: 9