Unseen image generating domain-free networks for generalized zero-shot learning

Cited by: 4
Authors
Kim, Hoseong [1 ]
Lee, Jewook [1 ]
Byun, Hyeran [1 ]
Affiliations
[1] Yonsei Univ, Dept Comp Sci, Seoul, South Korea
Funding
National Research Foundation, Singapore
Keywords
Generalized zero-shot learning; Unseen image generation; Extreme data bias; Data bias; Generative adversarial networks;
DOI
10.1016/j.neucom.2020.05.043
CLC classification
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
In generalized zero-shot learning (GZSL), it is imperative to solve the bias problem caused by the extreme data imbalance between seen and unseen classes, i.e., unseen classes are misclassified as seen classes. We alleviate the bias problem by generating synthetic images of unseen classes. The most challenging part is that existing GAN methods focus only on producing authentic seen images, so realistic unseen images cannot be generated. Specifically, we propose a novel zero-shot generative adversarial network (ZSGAN) which learns the relationship between images and the attributes shared by seen and unseen classes. Unlike existing works that generate synthetic features of unseen classes, we generate realistic unseen images that generalize better. For instance, generated unseen images can be used for zero-shot detection, segmentation, and image translation, since images retain spatial information. We also propose domain-free networks (DFN) that effectively distinguish the seen and unseen domains of input images. We evaluate our approaches on three challenging GZSL datasets: CUB, FLO, and AWA2. We outperform the state-of-the-art methods and also empirically verify that our proposed method is network-agnostic, i.e., the generated unseen images improve performance regardless of the neural network type. (c) 2020 Elsevier B.V. All rights reserved.
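The bias problem the abstract describes, and the effect of adding synthetic unseen-class samples, can be illustrated with a minimal toy sketch. This is a hypothetical numpy example, not the authors' ZSGAN: class "attributes" are 3-dimensional vectors, samples are noisy observations around them, the classifier is nearest-centroid, and "generation" is simulated by sampling around the shared unseen attribute vector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical): each class is defined by an attribute vector;
# samples are noisy observations around it (a stand-in for image features).
attrs = {"seen_a": np.array([1.0, 0.0, 0.0]),
         "seen_b": np.array([0.0, 1.0, 0.0]),
         "unseen": np.array([0.0, 0.0, 1.0])}

def sample(attr, n=50):
    # Draw n noisy samples around a class attribute vector.
    return attr + 0.1 * rng.normal(size=(n, 3))

def nearest_centroid(x, centroids):
    # Assign each row of x to the class with the closest centroid.
    names = list(centroids)
    dists = np.stack([np.linalg.norm(x - centroids[k], axis=1) for k in names])
    return [names[i] for i in np.argmin(dists, axis=0)]

# 1) The GZSL bias problem: the classifier only knows seen-class centroids,
#    so every unseen test sample is forced into a seen class.
train = {k: sample(attrs[k]) for k in ("seen_a", "seen_b")}
centroids = {k: v.mean(axis=0) for k, v in train.items()}
unseen_test = sample(attrs["unseen"])
pred_biased = nearest_centroid(unseen_test, centroids)
assert "unseen" not in pred_biased  # all unseen samples misclassified as seen

# 2) Alleviation: add synthetic unseen samples built from the shared
#    attribute vector (a stand-in for ZSGAN-generated unseen images).
synthetic_unseen = sample(attrs["unseen"])
centroids["unseen"] = synthetic_unseen.mean(axis=0)
pred_fixed = nearest_centroid(unseen_test, centroids)
acc = float(np.mean([p == "unseen" for p in pred_fixed]))
print(f"unseen accuracy after synthesis: {acc:.2f}")
```

The sketch only shows why synthesizing unseen-class data removes the seen-class bias of the decision rule; the paper's contribution is generating realistic unseen *images* (rather than features) from attributes shared across classes.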
Pages: 67-77 (11 pages)
Related papers (10 of 50 shown)
  • [1] Unbiased feature generating for generalized zero-shot learning
    Niu, Chang
    Shang, Junyuan
    Huang, Junchu
    Yang, Junmei
    Song, Yuting
    Zhou, Zhiheng
    Zhou, Guoxu
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2022, 89
  • [2] Generalized zero-shot learning for classifying unseen wafer map patterns
    Kim, Han Kyul
    Shim, Jaewoong
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 133
  • [3] Cooperative Coupled Generative Networks for Generalized Zero-Shot Learning
    Sun, Liang
    Song, Junjie
    Wang, Ye
    Li, Baoyu
    IEEE ACCESS, 2020, 8 : 119287 - 119299
  • [4] Alleviating Domain Shift via Discriminative Learning for Generalized Zero-Shot Learning
    Ye, Yalan
    He, Yukun
    Pan, Tongjie
    Li, Jingjing
    Shen, Heng Tao
    IEEE TRANSACTIONS ON MULTIMEDIA, 2022, 24 : 1325 - 1337
  • [5] Residual-Prototype Generating Network for Generalized Zero-Shot Learning
    Zhang, Zeqing
    Li, Xiaofan
    Ma, Tai
    Gao, Zuodong
    Li, Cuihua
    Lin, Weiwei
    MATHEMATICS, 2022, 10 (19)
  • [6] A Review of Generalized Zero-Shot Learning Methods
    Pourpanah, Farhad
    Abdar, Moloud
    Luo, Yuxuan
    Zhou, Xinlei
    Wang, Ran
    Lim, Chee Peng
    Wang, Xi-Zhao
    Wu, Q. M. Jonathan
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (04) : 4051 - 4070
  • [7] Generating generalized zero-shot learning based on dual-path feature enhancement
    Chang, Xinyi
    Wang, Zhen
    Liu, Wenhao
    Gao, Limeng
    Yan, Bingshuai
    MULTIMEDIA SYSTEMS, 2024, 30 (05)
  • [8] Generalized Zero-Shot Learning using Generated Proxy Unseen Samples and Entropy Separation
    Gune, Omkar
    Banerjee, Biplab
    Chaudhuri, Subhasis
    Cuzzolin, Fabio
    MM '20: PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, 2020, : 4262 - 4270
  • [9] Enhancing Domain-Invariant Parts for Generalized Zero-Shot Learning
    Zhang, Yang
    Feng, Songhe
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 6283 - 6291
  • [10] A Unified Approach for Conventional Zero-Shot, Generalized Zero-Shot, and Few-Shot Learning
    Rahman, Shafin
    Khan, Salman
    Porikli, Fatih
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2018, 27 (11) : 5652 - 5667