Leveraging Self-Distillation and Disentanglement Network to Enhance Visual-Semantic Feature Consistency in Generalized Zero-Shot Learning

Cited by: 0
Authors
Liu, Xiaoming [1 ,2 ,3 ]
Wang, Chen [1 ,2 ]
Yang, Guan [1 ,2 ]
Wang, Chunhua [4 ]
Long, Yang [5 ]
Liu, Jie [3 ,6 ]
Zhang, Zhiyuan [1 ,2 ]
Affiliations
[1] Zhongyuan Univ Technol, Sch Comp Sci, Zhengzhou 450007, Peoples R China
[2] Zhengzhou Key Lab Text Proc & Image Understanding, Zhengzhou 450007, Peoples R China
[3] Res Ctr Language Intelligence China, Beijing 100089, Peoples R China
[4] Huanghuai Univ, Sch Animat Acad, Zhumadian 463000, Peoples R China
[5] Univ Durham, Dept Comp Sci, Durham DH1 3LE, England
[6] North China Univ Technol, Sch Informat Sci, Beijing 100144, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
generalized zero-shot learning; self-distillation; disentanglement network; visual-semantic feature consistency;
DOI
10.3390/electronics13101977
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Generalized zero-shot learning (GZSL) aims to recognize both seen and unseen classes while training only on seen-class samples and auxiliary semantic descriptions. Recent state-of-the-art methods either infer unseen classes from semantic information or synthesize unseen-class samples with generative models conditioned on semantic information; both strategies rely on correctly aligning visual and semantic features. However, they often overlook the inconsistency between the original visual features and the semantic attributes. Moreover, because of cross-modal dataset biases, the visual features the model extracts or synthesizes may be mismatched with some semantic features, which can prevent the model from properly aligning the two modalities. To address this issue, this paper proposes a GZSL framework that enhances visual-semantic feature consistency with a self-distillation and disentanglement network (SDDN), which produces semantically consistent refined visual features and non-redundant semantic features. Firstly, SDDN applies self-distillation to refine the visual features that the model extracts and synthesizes. Subsequently, the visual and semantic features are disentangled and aligned by a disentanglement network to enhance their consistency. Finally, the consistent visual-semantic features are fused to jointly train a GZSL classifier. Extensive experiments demonstrate that the proposed method achieves competitive results on four challenging benchmark datasets (AWA2, CUB, FLO, and SUN).
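The self-distillation step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the temperature value, the KL(teacher || student) formulation, and the function names are assumptions; in the paper's setting, the teacher distribution would come from the refined visual features and the student from the raw extracted or synthesized ones.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax over a 1-D logit vector."""
    z = np.asarray(logits, dtype=float) / temperature
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def self_distill_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence from the teacher's soft targets to the student's
    predictions, the usual objective in (self-)distillation."""
    p = softmax(teacher_logits, temperature)  # soft targets (refined branch)
    q = softmax(student_logits, temperature)  # student predictions
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))
```

When the student already matches the teacher the loss is zero, and it grows as the two distributions diverge, which drives the student branch toward the refined, semantically consistent features.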
Pages: 18