Dual Projective Zero-Shot Learning Using Text Descriptions

Cited by: 7
Authors
Rao, Yunbo [1 ]
Yang, Ziqiang [1 ]
Zeng, Shaoning [2 ]
Wang, Qifeng [3 ]
Pu, Jiansu [4 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Informat & Software Engn, 4,Sect 2,North Jianshe Rd, Chengdu 610054, Sichuan, Peoples R China
[2] Univ Elect Sci & Technol China, Yangtze Delta Reg Inst Huzhou, Chengdu 313000, Sichuan, Peoples R China
[3] Google Berkeley, Berkeley, CA 94720 USA
[4] Univ Elect Sci & Technol China, Sch Comp Sci & Engn, 4,Sect 2,North Jianshe Rd, Chengdu 610054, Sichuan, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Zero-shot learning; generalized zero-shot learning; autoencoder; inductive zero-shot learning;
DOI
10.1145/3514247
CLC classification
TP [automation technology, computer technology];
Subject classification code
0812;
Abstract
Zero-shot learning (ZSL) aims to recognize image instances of unseen classes solely based on the semantic descriptions of the unseen classes. In this field, Generalized Zero-Shot Learning (GZSL) is a challenging problem in which images of both seen and unseen classes are mixed in the testing phase. Existing methods formulate GZSL as a semantic-visual correspondence problem and apply generative models such as Generative Adversarial Networks and Variational Autoencoders to solve it. However, these methods suffer from the bias problem, since images of unseen classes are often misclassified into seen classes. In this work, a novel model named the Dual Projective model for Zero-Shot Learning (DPZSL) is proposed using text descriptions. To alleviate the bias problem, we leverage two autoencoders to project the visual and semantic features into a latent space and evaluate the embeddings with a visual-semantic correspondence loss function. An additional novel classifier is also introduced to ensure the discriminability of the embedded features. Our method focuses on the more challenging inductive ZSL setting, in which only labeled data from seen classes are used in the training phase. The experimental results on two popular datasets, Caltech-UCSD Birds-200-2011 (CUB) and North America Birds (NAB), show that the proposed DPZSL model significantly outperforms existing methods in both the inductive ZSL and GZSL settings. In particular, in the GZSL setting, our model yields an improvement of up to 15.2% over the state-of-the-art CANZSL on the CUB and NAB datasets under two splits.
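The abstract describes two autoencoders that project visual and semantic features into a shared latent space, trained with reconstruction objectives plus a visual-semantic correspondence loss that pulls matched pairs together. A minimal linear sketch of that idea follows; all dimensions, the tied-weight (transpose) decoders, and the simple mean-squared losses are illustrative assumptions for exposition, not the paper's actual DPZSL architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical, for illustration only).
d_vis, d_sem, d_lat, n = 8, 6, 4, 5

# Paired visual and semantic feature batches for the same n instances.
X_vis = rng.normal(size=(n, d_vis))
X_sem = rng.normal(size=(n, d_sem))

# Two linear autoencoders with tied weights: encoder W, decoder W.T.
W_vis = rng.normal(size=(d_vis, d_lat))
W_sem = rng.normal(size=(d_sem, d_lat))

Z_vis = X_vis @ W_vis  # visual embedding in the shared latent space
Z_sem = X_sem @ W_sem  # semantic embedding in the shared latent space

# Reconstruction losses for each autoencoder.
rec_vis = np.mean((Z_vis @ W_vis.T - X_vis) ** 2)
rec_sem = np.mean((Z_sem @ W_sem.T - X_sem) ** 2)

# Visual-semantic correspondence loss: matched pairs should align
# in the latent space, which is what counteracts the seen-class bias.
corr = np.mean((Z_vis - Z_sem) ** 2)

total_loss = rec_vis + rec_sem + corr
```

At inference time, an unseen-class image and the unseen classes' text-derived semantic vectors would be embedded with the trained encoders and matched in the latent space, e.g. by nearest neighbor.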
Pages: 17
Related papers
50 items total
  • [21] A transformer-based dual contrastive learning approach for zero-shot learning
    Lei, Yu
    Jing, Ran
    Li, Fangfang
    Gao, Quanxue
    Deng, Cheng
    NEUROCOMPUTING, 2025, 626
  • [22] Using Pseudo-Labelled Data for Zero-Shot Text Classification
    Wang, Congcong
    Nulty, Paul
    Lillis, David
    NATURAL LANGUAGE PROCESSING AND INFORMATION SYSTEMS (NLDB 2022), 2022, 13286 : 35 - 46
  • [23] Learning semantic ambiguities for zero-shot learning
    Hanouti, Celina
    Le Borgne, Herve
    MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 82 (26) : 40745 - 40759
  • [25] Zero-Shot Program Representation Learning
    Cui, Nan
    Jiang, Yuze
    Gu, Xiaodong
    Shen, Beijun
    30TH IEEE/ACM INTERNATIONAL CONFERENCE ON PROGRAM COMPREHENSION (ICPC 2022), 2022, : 60 - 70
  • [26] Practical Aspects of Zero-Shot Learning
    Saad, Elie
    Paprzycki, Marcin
    Ganzha, Maria
    COMPUTATIONAL SCIENCE, ICCS 2022, PT II, 2022, : 88 - 95
  • [27] Research progress of zero-shot learning
    Sun, Xiaohong
    Gu, Jinan
    Sun, Hongying
    APPLIED INTELLIGENCE, 2021, 51 (06) : 3600 - 3614
  • [29] Zero-Shot Learning With Transferred Samples
    Guo, Yuchen
    Ding, Guiguang
    Han, Jungong
    Gao, Yue
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2017, 26 (07) : 3277 - 3290
  • [30] Research and Development on Zero-Shot Learning
    Zhang L.-N.
    Zuo X.
    Liu J.-W.
    Zidonghua Xuebao/Acta Automatica Sinica, 2020, 46 (01): : 1 - 23