RE-GZSL: Relation Extrapolation for Generalized Zero-Shot Learning

Cited by: 0
Authors
Wu, Yao [1]
Kong, Xia [1]
Xie, Yuan [2,3]
Qu, Yanyun [1]
Affiliations
[1] Xiamen Univ, Sch Informat, Xiamen 361005, Peoples R China
[2] East China Normal Univ, Sch Comp Sci & Technol, Shanghai 200062, Peoples R China
[3] East China Normal Univ, Chongqing Inst, Chongqing 401120, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Zero-shot learning; generalized zero-shot learning; relation extrapolation; generative adversarial networks; contrastive learning; image classification
DOI
10.1109/TCSVT.2024.3486074
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology]
Discipline Codes
0808; 0809
Abstract
Unlike Conventional Zero-Shot Learning (CZSL), which focuses only on recognizing unseen classes with a classifier trained on seen classes and semantic embeddings, Generalized Zero-Shot Learning (GZSL) requires a classifier trained on seen classes to recognize objects from both seen and unseen classes. To tackle this problem, feature-generation-based models have been proposed to synthesize visual features for unseen classes conditioned on their semantic descriptors. However, they treat these semantic descriptors as independent individuals without exploring the structural relations among categories. We propose a novel approach, dubbed Relation Extrapolation based feature generation for GZSL (RE-GZSL), which generates features for unseen classes by extrapolating features from seen classes based on semantic relations. In RE-GZSL, a visual-semantic relation alignment loss and an instance-prototype contrastive loss are presented to align visual relations with semantic relations. To preserve the information carried by the visual features before and after alignment, a discrimination preservation loss is further introduced. In addition, a feature mixing module is built to synthesize features for unseen classes that are more realistic and more tightly related to seen classes. Experimental results demonstrate that RE-GZSL outperforms competitors on four benchmark datasets, and comprehensive ablation studies and analyses dissect the factors behind this success. Code is available at: https://github.com/Barcaaaa/RE-GZSL.
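The abstract names two concrete mechanisms: relation-weighted extrapolation of seen-class features into unseen-class features, and an instance-prototype contrastive loss. The snippet below is a minimal PyTorch sketch of how such components could look in general; it is not the authors' implementation (see the linked repository for that), and every function name, the cosine-similarity weighting, and all hyperparameter values are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def relation_weights(sem_unseen, sem_seen, temperature=0.1):
    """Softmax-normalized cosine similarities between one unseen-class
    semantic descriptor (d,) and all seen-class descriptors (S, d).
    Assumed form of the 'semantic relations'; the paper may differ."""
    sims = F.cosine_similarity(sem_unseen.unsqueeze(0), sem_seen, dim=1)  # (S,)
    return F.softmax(sims / temperature, dim=0)

def mix_unseen_feature(seen_prototypes, weights, noise_std=0.1):
    """Extrapolate one unseen-class visual feature as a relation-weighted
    mixture of seen-class prototypes (S, d), plus noise for diversity."""
    mixed = weights @ seen_prototypes  # (d,)
    return mixed + noise_std * torch.randn_like(mixed)

def instance_prototype_contrastive(features, labels, prototypes, tau=0.07):
    """InfoNCE-style loss pulling each instance (B, d) toward its own class
    prototype (C, d) and pushing it away from the other prototypes."""
    logits = F.normalize(features, dim=1) @ F.normalize(prototypes, dim=1).t()
    return F.cross_entropy(logits / tau, labels)

# Toy usage with made-up sizes: 40 seen classes, 85-dim attributes,
# 2048-dim visual features.
w = relation_weights(torch.randn(85), torch.randn(40, 85))
synthetic = mix_unseen_feature(torch.randn(40, 2048), w)  # (2048,)
```

Under these assumptions, the relation weights tie each synthesized unseen-class feature to its semantically closest seen classes, which is the "borrowing from seen classes" idea the abstract describes.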
Pages: 1973-1986
Page count: 14