Learning Modality-Invariant Latent Representations for Generalized Zero-shot Learning

Cited: 25
Authors
Li, Jingjing [1 ]
Jing, Mengmeng [1 ]
Zhu, Lei [2 ]
Ding, Zhengming [3 ]
Lu, Ke [1 ]
Yang, Yang [1 ]
Affiliations
[1] Univ Elect Sci & Technol China, Chengdu, Peoples R China
[2] Shandong Normal Univ, Jinan, Shandong, Peoples R China
[3] Indiana Univ Purdue Univ, Indianapolis, IN 46202 USA
Source
MM '20: PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA | 2020
Funding
National Natural Science Foundation of China;
Keywords
Zero-shot learning; mutual information estimation; generalized ZSL; variational autoencoders;
DOI
10.1145/3394171.3413503
CLC Number
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recently, feature generating methods have been successfully applied to zero-shot learning (ZSL). However, most previous approaches generate only visual representations for zero-shot recognition. In fact, typical ZSL is a classic multi-modal learning protocol that involves both a visual space and a semantic space. In this paper, we therefore present a new method that simultaneously generates visual and semantic representations, so that the essential multi-modal information associated with unseen classes can be captured. Specifically, we address the most challenging issue in such a paradigm, i.e., how to handle the domain shift and thus guarantee that the learned representations are modality-invariant. To this end, we propose two strategies: 1) maximizing the mutual information between the latent visual representations and the latent semantic representations; 2) maximizing the entropy of the joint distribution of the two latent representations. We argue that these two strategies together align the two modalities well. Finally, extensive experiments on five widely used datasets verify that the proposed method significantly outperforms previous state-of-the-art methods.
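The abstract's first strategy scores cross-modal alignment by the mutual information between paired latent codes. As an illustrative sketch only (not the paper's actual estimator, which relies on neural mutual information estimation), MI has a closed form per dimension when each matched pair of latent dimensions is assumed jointly Gaussian: I = -0.5 * ln(1 - rho^2), where rho is their correlation. The function name and toy data below are hypothetical.

```python
import numpy as np

def gaussian_mi_per_dim(z_visual, z_semantic):
    """Rough per-dimension MI estimate between paired latent codes,
    assuming each matched dimension pair is jointly Gaussian:
    I = -0.5 * ln(1 - rho^2)."""
    mi = []
    for d in range(z_visual.shape[1]):
        rho = np.corrcoef(z_visual[:, d], z_semantic[:, d])[0, 1]
        rho = np.clip(rho, -0.999999, 0.999999)  # keep log argument positive
        mi.append(-0.5 * np.log(1.0 - rho ** 2))
    return np.array(mi)

# Toy check: semantic latents are a noisy copy of the visual latents,
# so the estimated MI should be clearly positive; for independent
# latents it should be near zero.
rng = np.random.default_rng(0)
z_vis = rng.standard_normal((2000, 4))
z_sem = z_vis + 0.5 * rng.standard_normal((2000, 4))
print(gaussian_mi_per_dim(z_vis, z_sem).mean())  # high: well aligned
print(gaussian_mi_per_dim(z_vis, rng.standard_normal((2000, 4))).mean())  # near 0
```

Under this reading, pushing the two encoders to raise this score ties each visual latent dimension to its semantic counterpart, which is the modality-invariance goal the abstract describes.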
Pages: 1348-1356
Page count: 9
Related Papers
50 records
  • [41] Variational Disentangle Zero-Shot Learning
    Su, Jie
    Wan, Jinhao
    Li, Taotao
    Li, Xiong
    Ye, Yuheng
    MATHEMATICS, 2023, 11 (16)
  • [42] Detecting Errors with Zero-Shot Learning
    Wu, Xiaoyu
    Wang, Ning
    ENTROPY, 2022, 24 (07)
  • [43] Prototype rectification for zero-shot learning
    Yi, Yuanyuan
    Zeng, Guolei
    Ren, Bocheng
    Yang, Laurence T.
    Chai, Bin
    Li, Yuxin
    PATTERN RECOGNITION, 2024, 156
  • [44] Contrastive Prototype-Guided Generation for Generalized Zero-Shot Learning
    Wang, Yunyun
    Mao, Jian
    Guo, Chenguang
    Chen, Songcan
    NEURAL NETWORKS, 2024, 176
  • [45] A review on multimodal zero-shot learning
    Cao, Weipeng
    Wu, Yuhao
    Sun, Yixuan
    Zhang, Haigang
    Ren, Jin
    Gu, Dujuan
    Wang, Xingkai
    WILEY INTERDISCIPLINARY REVIEWS-DATA MINING AND KNOWLEDGE DISCOVERY, 2023, 13 (02)
  • [46] Attribute subspaces for zero-shot learning
    Zhou, Lei
    Liu, Yang
    Bai, Xiao
    Li, Na
    Yu, Xiaohan
    Zhou, Jun
    Hancock, Edwin R.
    PATTERN RECOGNITION, 2023, 144
  • [47] LVQ Treatment for Zero-Shot Learning
    Ismailoglu, Firat
    TURKISH JOURNAL OF ELECTRICAL ENGINEERING AND COMPUTER SCIENCES, 2023, 31 (01) : 216 - 237
  • [48] Cross-modal propagation network for generalized zero-shot learning
    Guo, Ting
    Liang, Jianqing
    Liang, Jiye
    Xie, Guo-Sen
    PATTERN RECOGNITION LETTERS, 2022, 159 : 125 - 131
  • [49] RE-GZSL: Relation Extrapolation for Generalized Zero-Shot Learning
    Wu, Yao
    Kong, Xia
    Xie, Yuan
    Qu, Yanyun
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2025, 35 (03) : 1973 - 1986
  • [50] GSMFlow: Generation Shifts Mitigating Flow for Generalized Zero-Shot Learning
    Chen, Zhi
    Luo, Yadan
    Wang, Sen
    Li, Jingjing
    Huang, Zi
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 5374 - 5385