Leveraging Self-Distillation and Disentanglement Network to Enhance Visual-Semantic Feature Consistency in Generalized Zero-Shot Learning

Cited: 0
Authors
Liu, Xiaoming [1 ,2 ,3 ]
Wang, Chen [1 ,2 ]
Yang, Guan [1 ,2 ]
Wang, Chunhua [4 ]
Long, Yang [5 ]
Liu, Jie [3 ,6 ]
Zhang, Zhiyuan [1 ,2 ]
Affiliations
[1] Zhongyuan Univ Technol, Sch Comp Sci, Zhengzhou 450007, Peoples R China
[2] Zhengzhou Key Lab Text Proc & Image Understanding, Zhengzhou 450007, Peoples R China
[3] Res Ctr Language Intelligence China, Beijing 100089, Peoples R China
[4] Huanghuai Univ, Sch Animat Acad, Zhumadian 463000, Peoples R China
[5] Univ Durham, Dept Comp Sci, Durham DH1 3LE, England
[6] North China Univ Technol, Sch Informat Sci, Beijing 100144, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
generalized zero-shot learning; self-distillation; disentanglement network; visual-semantic feature consistency;
DOI
10.3390/electronics13101977
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Generalized zero-shot learning (GZSL) aims to recognize both seen and unseen classes while training only on seen-class samples and auxiliary semantic descriptions. Recent state-of-the-art methods either infer unseen classes directly from semantic information or synthesize unseen-class samples with generative models conditioned on semantic information; both approaches rely on correct alignment of visual and semantic features. However, they often overlook the inconsistency between original visual features and semantic attributes. Moreover, owing to cross-modal dataset biases, the visual features extracted and synthesized by the model may mismatch some semantic features, which hinders the model from properly aligning visual-semantic features. To address this issue, this paper proposes a GZSL framework that enhances the consistency of visual-semantic features through a self-distillation and disentanglement network (SDDN). The aim is to use the self-distillation and disentanglement network to obtain semantically consistent refined visual features and non-redundant semantic features. Firstly, SDDN applies self-distillation to refine the visual features that the model extracts and synthesizes. The visual-semantic features are then disentangled and aligned by a disentanglement network to enhance their consistency. Finally, the consistent visual-semantic features are fused to jointly train a GZSL classifier. Extensive experiments demonstrate that the proposed method achieves competitive results on four challenging benchmark datasets (AWA2, CUB, FLO, and SUN).
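The three-stage pipeline the abstract describes (self-distillation to refine visual features, disentanglement with an alignment constraint, and fusion for the final classifier) can be sketched at toy scale as follows. This is a minimal NumPy illustration, not the authors' implementation: the teacher/student heads, projection matrices, temperature, and all dimensions are hypothetical placeholders, and the orthogonality penalty stands in for whatever disentanglement loss SDDN actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, t=1.0):
    """Row-wise softmax with temperature t (t > 1 softens the distribution)."""
    z = z / t
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q, eps=1e-12):
    """Row-wise KL divergence KL(p || q)."""
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

# Toy dimensions; real models use e.g. 2048-d visual features and
# 85-d attribute vectors on AWA2.
d_vis, d_att, n_cls, batch = 32, 8, 5, 4

x = rng.normal(size=(batch, d_vis))            # extracted visual features

# --- Stage 1, self-distillation: a frozen "teacher" copy of the classifier
# head supplies softened targets that refine the "student" head's outputs.
W_student = rng.normal(size=(d_vis, n_cls)) * 0.1
W_teacher = W_student + rng.normal(size=(d_vis, n_cls)) * 0.01  # e.g. an EMA copy

p_teacher = softmax(x @ W_teacher, t=4.0)      # softened teacher distribution
p_student = softmax(x @ W_student, t=4.0)
distill_loss = kl(p_teacher, p_student).mean()

# --- Stage 2, disentanglement: split each visual feature into a
# semantically consistent part and a residual part, penalizing their
# overlap so the consistent part can align with the attribute space.
P_sem = rng.normal(size=(d_vis, d_att)) * 0.1  # projection to attribute space
P_res = rng.normal(size=(d_vis, d_att)) * 0.1  # projection to residual space

h_sem = x @ P_sem
h_res = x @ P_res
cos = np.sum(h_sem * h_res, axis=-1) / (
    np.linalg.norm(h_sem, axis=-1) * np.linalg.norm(h_res, axis=-1))
disent_loss = np.mean(cos ** 2)                # push the two parts orthogonal

# --- Stage 3, fusion: concatenate the consistent visual part with the
# semantic attributes to train the final GZSL classifier.
attrs = rng.normal(size=(batch, d_att))        # per-sample semantic attributes
fused = np.concatenate([h_sem, attrs], axis=-1)

print(fused.shape)                              # (batch, 2 * d_att)
```

In a full training loop the distillation and disentanglement losses would be minimized jointly with the classification loss; here they are only evaluated once to show the shapes and signs involved.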
Pages: 18
Related Papers
46 records
  • [1] Indirect visual-semantic alignment for generalized zero-shot recognition
    Chen, Yan-He
    Yeh, Mei-Chen
    MULTIMEDIA SYSTEMS, 2024, 30 (02)
  • [2] Zero-shot learning via visual-semantic aligned autoencoder
    Wei, Tianshu
    Huang, Jinjie
    Jin, Cong
    MATHEMATICAL BIOSCIENCES AND ENGINEERING, 2023, 20 (08) : 14081 - 14095
  • [3] Contrastive semantic disentanglement in latent space for generalized zero-shot learning
    Fan, Wentao
    Liang, Chen
    Wang, Tian
    KNOWLEDGE-BASED SYSTEMS, 2022, 257
  • [4] Augmented semantic feature based generative network for generalized zero-shot learning
    Li, Zhiqun
    Chen, Qiong
    Liu, Qingfa
    NEURAL NETWORKS, 2021, 143 : 1 - 11
  • [5] Contrastive visual feature filtering for generalized zero-shot learning
    Meng, Shixuan
    Jiang, Rongxin
    Tian, Xiang
    Zhou, Fan
    Chen, Yaowu
    Liu, Junjie
    Shen, Chen
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2024,
  • [6] Content-Attribute Disentanglement for Generalized Zero-Shot Learning
    An, Yoojin
    Kim, Sangyeon
    Liang, Yuxuan
    Zimmermann, Roger
    Kim, Dongho
    Kim, Jihie
    IEEE ACCESS, 2022, 10 : 58320 - 58331
  • [7] Attribute disentanglement and re-entanglement for generalized zero-shot learning
    Zhou, Quan
    Liang, Yucuan
    Zhang, Zhenqi
    Cao, Wenming
    PATTERN RECOGNITION LETTERS, 2024, 186 : 1 - 7
  • [8] Semantic Contrastive Embedding for Generalized Zero-Shot Learning
    Han, Zongyan
    Fu, Zhenyong
    Chen, Shuo
    Yang, Jian
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2022, 130 (11) : 2606 - 2622
  • [10] Joint Visual and Semantic Optimization for zero-shot learning
    Wu, Hanrui
    Yan, Yuguang
    Chen, Sentao
    Huang, Xiangkang
    Wu, Qingyao
    Ng, Michael K.
    KNOWLEDGE-BASED SYSTEMS, 2021, 215 (215)