Attribute-Induced Bias Eliminating for Transductive Zero-Shot Learning

Cited by: 5
Authors
Yao, Hantao [1 ]
Min, Shaobo [2 ]
Zhang, Yongdong [2 ]
Xu, Changsheng [1 ,3 ]
Affiliations
[1] Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
[2] Univ Sci & Technol China, Natl Engn Lab Brain Inspired Intelligence Technol, Hefei 230026, Peoples R China
[3] Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100049, Peoples R China
Funding
National Natural Science Foundation of China; Beijing Natural Science Foundation;
Keywords
Semantics; Visualization; Bridges; Training; Knowledge transfer; Image recognition; Topology; Transductive Zero-Shot Learning; Graph Attribute Embedding; Attribute-Induced Bias Eliminating; Semantic-Visual Alignment;
DOI
10.1109/TMM.2021.3074252
Chinese Library Classification number
TP [Automation and Computer Technology];
Discipline code
0812;
Abstract
Transductive zero-shot learning (ZSL) aims to recognize unseen categories by aligning visual and semantic information in a joint embedding space. Four types of domain bias exist in transductive ZSL: the visual bias and the semantic bias between the seen and unseen domains, and the visual-semantic bias within each of the two domains. Existing methods address only a subset of these biases, leading to severe semantic ambiguity during knowledge transfer. To solve this problem, we propose a novel attribute-induced bias eliminating (AIBE) module for transductive ZSL. Specifically, for the visual bias between the two domains, a mean-teacher module is first used to bridge the visual representation discrepancy between the domains through unsupervised learning on unlabeled images. Then, an attentional graph attribute embedding is proposed to reduce the semantic bias between seen and unseen categories, using a graph operation to describe the semantic relationships among categories. To reduce the semantic-visual bias in the seen domain, we align the visual center of each category with its corresponding semantic attributes, rather than aligning each individual visual data point, which preserves the semantic relationships in the embedding space. Finally, for the semantic-visual bias in the unseen domain, an unseen semantic alignment constraint is designed to align the visual and semantic spaces in an unsupervised manner. Evaluations on several benchmarks demonstrate the effectiveness of the proposed method, e.g., 82.8%/75.5%, 97.1%/82.5%, and 73.2%/52.1% under the Conventional/Generalized ZSL settings on the CUB, AwA2, and SUN datasets, respectively.
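Two ingredients of the abstract above lend themselves to a compact sketch: the mean-teacher update that smooths visual representations over unlabeled unseen-domain data, and the seen-domain alignment of per-category visual centers with projected attributes. The snippet below is a minimal illustration of those two ideas only, not the authors' implementation; the function names, the linear projection `W`, and the squared-error loss are assumptions for illustration.

```python
import numpy as np

def ema_update(teacher, student, alpha=0.99):
    """Mean-teacher step: teacher weights become an exponential moving
    average of student weights, which stabilizes training on unlabeled
    unseen-domain images (assumed form of the mean-teacher module)."""
    return {k: alpha * teacher[k] + (1.0 - alpha) * student[k]
            for k in teacher}

def center_alignment_loss(features, labels, attributes, W):
    """Seen-domain semantic-visual alignment: match the visual *center*
    of each category to its attribute vector projected into visual
    space, instead of matching every individual sample."""
    classes = np.unique(labels)
    loss = 0.0
    for c in classes:
        center = features[labels == c].mean(axis=0)  # per-class visual center
        proj = attributes[c] @ W                     # semantic -> visual embedding
        loss += np.sum((center - proj) ** 2)
    return loss / len(classes)
```

When the projected attributes coincide with the class centers the loss is zero, which is exactly the fixed point the alignment constraint pushes toward; in practice `W` (or a deeper embedding) would be learned jointly with the consistency objective.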
Pages: 1933-1942
Number of pages: 10
Related Papers
50 records
  • [31] Adaptive Fusion Learning for Compositional Zero-Shot Recognition
    Min, Lingtong
    Fan, Ziman
    Wang, Shunzhou
    Dou, Feiyang
    Li, Xin
    Wang, Binglu
    IEEE TRANSACTIONS ON MULTIMEDIA, 2025, 27 : 1193 - 1204
  • [32] Towards Effective Deep Embedding for Zero-Shot Learning
    Zhang, Lei
    Wang, Peng
    Liu, Lingqiao
    Shen, Chunhua
    Wei, Wei
    Zhang, Yanning
    van den Hengel, Anton
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2020, 30 (09) : 2843 - 2852
  • [33] Learning MLatent Representations for Generalized Zero-Shot Learning
    Ye, Yalan
    Pan, Tongjie
    Luo, Tonghoujun
    Li, Jingjing
    Shen, Heng Tao
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 2252 - 2265
  • [34] Bidirectional Mapping Coupled GAN for Generalized Zero-Shot Learning
    Shermin, Tasfia
    Teng, Shyh Wei
    Sohel, Ferdous
    Murshed, Manzur
    Lu, Guojun
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2022, 31 : 721 - 733
  • [35] Dual Prototype Contrastive Network for Generalized Zero-Shot Learning
    Jiang, Huajie
    Li, Zhengxian
    Hu, Yongli
    Yin, Baocai
    Yang, Jian
    van den Hengel, Anton
    Yang, Ming-Hsuan
    Qi, Yuankai
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2025, 35 (02) : 1111 - 1122
  • [36] Towards Discriminative Feature Generation for Generalized Zero-Shot Learning
    Ge, Jiannan
    Xie, Hongtao
    Li, Pandeng
    Xie, Lingxi
    Min, Shaobo
    Zhang, Yongdong
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 10514 - 10529
  • [37] Disentangling Semantic-to-Visual Confusion for Zero-Shot Learning
    Ye, Zihan
    Hu, Fuyuan
    Lyu, Fan
    Li, Linyan
    Huang, Kaizhu
    IEEE TRANSACTIONS ON MULTIMEDIA, 2022, 24 : 2828 - 2840
  • [38] Cooperative Coupled Generative Networks for Generalized Zero-Shot Learning
    Sun, Liang
    Song, Junjie
    Wang, Ye
    Li, Baoyu
    IEEE ACCESS, 2020, 8 : 119287 - 119299
  • [39] Transductive Zero-Shot Action Recognition via Visually Connected Graph Convolutional Networks
    Xu, Yangyang
    Han, Chu
    Qin, Jing
    Xu, Xuemiao
    Han, Guoqiang
    He, Shengfeng
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2021, 32 (08) : 3761 - 3769
  • [40] Knowledge Distillation Classifier Generation Network for Zero-Shot Learning
    Yu, Yunlong
    Li, Bin
    Ji, Zhong
    Han, Jungong
    Zhang, Zhongfei
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (06) : 3183 - 3194