Zero-shot recognition with latent visual attributes learning

Cited by: 2
Authors
Xie, Yurui [1 ,2 ]
He, Xiaohai [1 ]
Zhang, Jing [1 ]
Luo, Xiaodong [1 ]
Affiliations
[1] Sichuan Univ, Coll Elect & Informat Engn, Chengdu, Peoples R China
[2] Chengdu Univ Informat Technol, Chengdu, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Zero-shot learning; Human-designed attributes; Dictionary learning; Visual attributes; Semantic representation; CONVOLUTIONAL NEURAL-NETWORKS;
DOI
10.1007/s11042-020-09316-4
CLC number
TP [Automation Technology, Computer Technology];
Subject classification code
0812;
Abstract
Zero-shot learning (ZSL) aims to recognize novel object categories by transferring knowledge extracted from the seen categories (source domain) to the unseen categories (target domain). Recently, most ZSL methods have concentrated on learning a visual-semantic alignment that bridges image features and their semantic representations by relying solely on human-designed attributes. However, few works study whether these human-designed attributes are discriminative enough for the recognition task. To address this problem, we propose a couple semantic dictionaries (CSD) learning approach that exploits latent visual attributes and aligns the visual and semantic spaces at the same time. Specifically, the learned visual attributes are incorporated into the semantic representation of the image features, consolidating the discriminative visual cues for object recognition. In addition, existing ZSL methods suffer from the domain shift issue because the source and target domains have completely disjoint label spaces. We therefore employ the visual-semantic alignment and the latent visual attributes learned from the source domain jointly to regularise the learning of the target domain, which ensures that information transfer extends across domains. We formulate this as an optimization problem with a unified objective and propose an iterative solver. Extensive experiments on two challenging benchmark datasets demonstrate that our approach outperforms several state-of-the-art ZSL methods.
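The abstract describes coupling visual features and semantic (attribute) representations through shared dictionary codes and classifying unseen images by projecting their features into the semantic space. The snippet below is a minimal sketch of that general idea only: a simplified coupled-dictionary objective solved by alternating ridge-regression updates, followed by nearest-prototype classification of unseen classes. The objective, the solver, and all names (fit_coupled_dictionaries, Dv, Ds, lam, gamma) are illustrative assumptions and do not reproduce the paper's actual CSD formulation, its latent-attribute term, or its target-domain regularisation.

```python
import numpy as np

def fit_coupled_dictionaries(X, S, n_codes=64, lam=1.0, gamma=0.1, n_iters=30, seed=0):
    """Alternating least-squares sketch of a coupled-dictionary ZSL objective:
        min_{Dv, Ds, A}  ||X - Dv A||^2 + lam * ||S - Ds A||^2 + gamma * ||A||^2
    X: (d, n) visual features; S: (k, n) class-level attribute vectors replicated
    per image. Symbols are illustrative, not the paper's notation."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    A = rng.standard_normal((n_codes, n))
    I = np.eye(n_codes)
    for _ in range(n_iters):
        # Dictionary updates: ridge-regularised least squares given the codes A.
        G = np.linalg.inv(A @ A.T + 1e-6 * I)
        Dv = X @ A.T @ G
        Ds = S @ A.T @ G
        # Shared-code update couples the visual and semantic reconstructions.
        M = Dv.T @ Dv + lam * Ds.T @ Ds + gamma * I
        A = np.linalg.solve(M, Dv.T @ X + lam * Ds.T @ S)
    return Dv, Ds

def predict_unseen(x, Dv, Ds, unseen_prototypes, gamma=0.1):
    """Infer the shared code from a visual feature alone, map it to the semantic
    space, and label it with the nearest unseen-class attribute prototype."""
    n_codes = Dv.shape[1]
    a = np.linalg.solve(Dv.T @ Dv + gamma * np.eye(n_codes), Dv.T @ x)
    s_hat = Ds @ a
    sims = unseen_prototypes @ s_hat / (
        np.linalg.norm(unseen_prototypes, axis=1) * np.linalg.norm(s_hat) + 1e-12)
    return int(np.argmax(sims))
```

In this toy setup the shared codes A merely stand in for latent attributes; the CSD approach summarised above treats latent visual attributes as an explicitly learned component and additionally regularises the target domain with the source-domain alignment.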
Pages: 27321-27335
Number of pages: 15
Related papers
50 records in total
  • [31] Zero-shot learning via visual feature enhancement and dual classifier learning for image recognition
    Zhao, Peng
    Xue, Huihui
    Ji, Xia
    Liu, Huiting
    Han, Li
    INFORMATION SCIENCES, 2023, 642
  • [32] JOINT PROJECTION AND SUBSPACE LEARNING FOR ZERO-SHOT RECOGNITION
    Liu, Guangzhen
    Guan, Jiechao
    Zhang, Manli
    Zhang, Jianhong
    Wang, Zihao
    Lu, Zhiwu
    2019 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), 2019, : 1228 - 1233
  • [33] Adaptive Fusion Learning for Compositional Zero-Shot Recognition
    Min, Lingtong
    Fan, Ziman
    Wang, Shunzhou
    Dou, Feiyang
    Li, Xin
    Wang, Binglu
    IEEE TRANSACTIONS ON MULTIMEDIA, 2025, 27 : 1193 - 1204
  • [34] Multimodal zero-shot learning for tactile texture recognition
    Cao, Guanqun
    Jiang, Jiaqi
    Bollegala, Danushka
    Li, Min
    Luo, Shan
    ROBOTICS AND AUTONOMOUS SYSTEMS, 2024, 176
  • [35] Learning complementary semantic information for zero-shot recognition
    Hu, Xiaoming
    Wang, Zilei
    Li, Junjie
    SIGNAL PROCESSING-IMAGE COMMUNICATION, 2023, 115
  • [36] Extreme Reverse Projection Learning for Zero-Shot Recognition
    Guan, Jiechao
    Zhao, An
    Lu, Zhiwu
    COMPUTER VISION - ACCV 2018, PT I, 2019, 11361 : 125 - 141
  • [37] Unifying Unsupervised Domain Adaptation and Zero-Shot Visual Recognition
    Wang, Qian
    Bu, Penghui
    Breckon, Toby P.
    2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2019,
  • [38] Survey on Knowledge-based Zero-shot Visual Recognition
    Feng Y.-G.
    Yu J.
    Sang J.-T.
    Yang P.-B.
    Ruan Jian Xue Bao/Journal of Software, 2021, 32 (02): 370 - 405
  • [39] Research progress of zero-shot learning
    Sun, Xiaohong
    Gu, Jinan
    Sun, Hongying
    APPLIED INTELLIGENCE, 2021, 51 (06) : 3600 - 3614