Content-Attribute Disentanglement for Generalized Zero-Shot Learning

Cited by: 2
Authors
An, Yoojin [1 ]
Kim, Sangyeon [2 ]
Liang, Yuxuan [3 ]
Zimmermann, Roger [3 ]
Kim, Dongho [4 ]
Kim, Jihie [1 ]
Affiliations
[1] Dongguk Univ, Dept Artificial Intelligence, Seoul 04620, South Korea
[2] Naver Webtoon AI, Seongnam 13529, South Korea
[3] Natl Univ Singapore, Sch Comp, Singapore 119077, Singapore
[4] Dongguk Univ, Dongguk Inst Convergence Educ, Seoul 04620, South Korea
Keywords
Visualization; Prototypes; Feature extraction; Codes; Training; Semantics; Task analysis; Computer vision; deep learning; disentangled representation; generalized zero-shot learning
DOI
10.1109/ACCESS.2022.3178800
CLC number
TP [automation technology, computer technology]
Discipline code
0812
Abstract
Humans can recognize or infer unseen classes of objects from descriptions of the classes' characteristics (semantic information). However, conventional deep learning models trained in a supervised manner cannot classify classes that were unseen during training. Hence, many studies have addressed generalized zero-shot learning (GZSL), which aims to produce systems that recognize both seen and unseen classes by transferring knowledge learned from seen classes to unseen ones. Since seen and unseen classes share a common semantic space, extracting appropriate semantic information from images is essential for GZSL. In addition to semantic-related information (attributes), images also contain semantic-unrelated information (content), which can degrade the classification performance of the model. We therefore propose a content-attribute disentanglement architecture that separates the content and attribute information of images. The proposed method comprises three major components: 1) a feature generation module that synthesizes unseen visual features; 2) a content-attribute disentanglement module that discriminates content codes from attribute codes in images; and 3) an attribute comparator module that measures the compatibility between the attribute codes and the class prototypes, which act as the ground truth. Extensive experiments show that our method achieves state-of-the-art or competitive results on four GZSL benchmark datasets and outperforms existing zero-shot learning methods on all of them. Our method also achieves the best accuracy on a zero-shot retrieval task. Our code is available at https://github.com/anyoojin1996/CA-GZSL.
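The three components described in the abstract are easiest to picture as a conditional generator, a two-headed encoder, and a compatibility scorer. Below is a minimal PyTorch-style sketch of that pipeline under stated assumptions: the module names, layer sizes, dimensions, and the cosine-similarity compatibility function are all illustrative choices, not the authors' implementation (the actual code is in the linked repository).

import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureGenerator(nn.Module):
    # Synthesizes a visual feature from a class semantic vector plus noise,
    # so training can include features for classes with no images.
    def __init__(self, attr_dim, noise_dim, feat_dim, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(attr_dim + noise_dim, hidden),
            nn.LeakyReLU(0.2),
            nn.Linear(hidden, feat_dim),
        )

    def forward(self, attrs, noise):
        return self.net(torch.cat([attrs, noise], dim=1))

class DisentanglementEncoder(nn.Module):
    # Splits a visual feature into a semantic-unrelated content code and a
    # semantic-related attribute code via two separate heads.
    def __init__(self, feat_dim, content_dim, attr_dim, hidden=512):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
        self.content_head = nn.Linear(hidden, content_dim)
        self.attr_head = nn.Linear(hidden, attr_dim)

    def forward(self, feats):
        h = self.shared(feats)
        return self.content_head(h), self.attr_head(h)

class AttributeComparator(nn.Module):
    # Scores compatibility between attribute codes and class prototypes;
    # cosine similarity is an assumed choice of compatibility function.
    def forward(self, attr_codes, prototypes):
        codes = F.normalize(attr_codes, dim=1)
        protos = F.normalize(prototypes, dim=1)
        return codes @ protos.t()  # (batch, num_classes) score matrix

# Toy forward pass: classify by the highest-scoring prototype over the
# union of seen and unseen classes, as GZSL inference requires.
if __name__ == "__main__":
    attr_dim, noise_dim, feat_dim, content_dim, n_classes = 85, 64, 2048, 128, 50
    gen = FeatureGenerator(attr_dim, noise_dim, feat_dim)
    enc = DisentanglementEncoder(feat_dim, content_dim, attr_dim)
    comparator = AttributeComparator()

    prototypes = torch.randn(n_classes, attr_dim)                # one semantic vector per class
    fake_feats = gen(prototypes[:8], torch.randn(8, noise_dim))  # synthesized visual features
    _, attr_codes = enc(fake_feats)                              # discard content, keep attributes
    preds = comparator(attr_codes, prototypes).argmax(dim=1)     # predicted class indices
    print(preds.shape)  # torch.Size([8])

In this reading, the disentanglement step matters because only the attribute code is compared against prototypes, so the content code absorbs semantic-unrelated variation that would otherwise corrupt the compatibility scores.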
Pages: 58320-58331
Page count: 12