Contrastive Prototype-Guided Generation for Generalized Zero-Shot Learning

Cited by: 3
Authors
Wang, Yunyun [1 ]
Mao, Jian [1 ]
Guo, Chenguang [1 ]
Chen, Songcan [2 ]
Affiliations
[1] Nanjing Univ Posts & Telecommun, Nanjing, Peoples R China
[2] Nanjing Univ Aeronaut & Astronaut, Nanjing, Peoples R China
Keywords
Zero-shot learning; Generative adversarial network; Contrastive prototype; Feature diversity
DOI
10.1016/j.neunet.2024.106324
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Generalized zero-shot learning (GZSL) aims to recognize both seen and unseen classes, while only samples from seen classes are available for training. Mainstream methods mitigate the lack of unseen training data by synthesizing visual samples for unseen classes. However, the sample generator is learned only from seen-class samples, and semantic descriptions of unseen classes are merely fed to the pre-trained generator for unseen data generation; the generator is therefore biased towards seen categories, and the quality of the generated unseen samples, in both precision and diversity, remains the main learning challenge. To this end, we propose Contrastive Prototype-Guided Generation for Generalized Zero-Shot Learning (PGZSL), which guides sample generation with unseen-class knowledge. First, unseen data generation in PGZSL is guided and rectified by contrastive prototypical anchors that provide both class semantic consistency and feature discriminability. Second, PGZSL introduces Certainty-Driven Mixup for the generator to enrich the diversity of generated unseen samples while suppressing the generation of uncertain boundary samples. Empirical results on five benchmark datasets show that PGZSL significantly outperforms state-of-the-art methods on both ZSL and GZSL tasks.
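The Certainty-Driven Mixup mentioned in the abstract builds on standard mixup interpolation, which blends pairs of samples with a Beta-distributed weight. The sketch below shows only the generic feature-level mixup step; the certainty-driven weighting that suppresses uncertain boundary samples is specific to the paper and is not reproduced here. The function name and parameters are illustrative, not taken from the paper.

```python
import numpy as np

def mixup(x_i, x_j, alpha=0.4, rng=None):
    """Generic mixup: interpolate two feature vectors with a
    Beta(alpha, alpha)-distributed coefficient lam in [0, 1]."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    return lam * x_i + (1.0 - lam) * x_j, lam

# Example: mix two synthetic 4-dimensional feature vectors.
a = np.ones(4)
b = np.zeros(4)
mixed, lam = mixup(a, b, alpha=0.4)
# Here every coordinate of `mixed` equals lam, since b is all zeros.
```

In a generative GZSL setting, this interpolation would typically be applied to generated unseen-class features (and their semantic labels) to densify the synthetic distribution between class prototypes.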
Pages: 8
Related papers
50 items in total
  • [41] Spherical Zero-Shot Learning
    Shen, Jiayi
    Xiao, Zehao
    Zhen, Xiantong
    Zhang, Lei
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2022, 32 (02) : 634 - 645
  • [42] Feature Generation Approach with Indirect Domain Adaptation for Transductive Zero-shot Learning
    Huang S.
    Yang W.-L.
    Zhang Y.
    Zhang X.-H.
    Yang D.
    Ruan Jian Xue Bao/Journal of Software, 2022, 33 (11): : 4268 - 4284
  • [43] Learning Modality-Invariant Latent Representations for Generalized Zero-shot Learning
    Li, Jingjing
    Jing, Mengmeng
    Zhu, Lei
    Ding, Zhengming
    Lu, Ke
    Yang, Yang
    MM '20: PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, 2020, : 1348 - 1356
  • [44] Generalized zero-shot action recognition through reservation-based gate and semantic-enhanced contrastive learning
    Shang, Junyuan
    Niu, Chang
    Tao, Xiyuan
    Zhou, Zhiheng
    Yang, Junmei
    KNOWLEDGE-BASED SYSTEMS, 2024, 301
  • [45] A Joint Generative Model for Zero-Shot Learning
    Gao, Rui
    Hou, Xingsong
    Qin, Jie
    Liu, Li
    Zhu, Fan
    Zhang, Zhao
    COMPUTER VISION - ECCV 2018 WORKSHOPS, PT IV, 2019, 11132 : 631 - 646
  • [46] Consistency-guided pseudo labeling for transductive zero-shot learning
    Yang, Hairui
    Wang, Ning
    Wang, Zhihui
    Wang, Lei
    Li, Haojie
    INFORMATION SCIENCES, 2024, 670
  • [47] Visual-guided attentive attributes embedding for zero-shot learning
    Zhang, Rui
    Zhu, Qi
    Xu, Xiangyu
    Zhang, Daoqiang
    Huang, Sheng-Jun
    NEURAL NETWORKS, 2021, 143 : 709 - 718
  • [48] GENERALIZED ZERO-SHOT RECOGNITION THROUGH IMAGE-GUIDED SEMANTIC CLASSIFICATION
    Li, Fang
    Yeh, Mei-Chen
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 2483 - 2487
  • [49] Language-Augmented Pixel Embedding for Generalized Zero-Shot Learning
    Wang, Ziyang
    Gou, Yunhao
    Li, Jingjing
    Zhu, Lei
    Shen, Heng Tao
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2023, 33 (03) : 1019 - 1030
  • [50] From Classical to Generalized Zero-Shot Learning: A Simple Adaptation Process
    Le Cacheux, Yannick
    Le Borgne, Herve
    Crucianu, Michel
    MULTIMEDIA MODELING, MMM 2019, PT II, 2019, 11296 : 465 - 477