Semi-supervised class-conditional image synthesis with Semantics-guided Adaptive Feature Transforms

Cited by: 3
Authors
Huo, Xiaoyang [1 ]
Zhang, Yunfei [1 ]
Wu, Si [1 ]
Affiliations
[1] South China Univ Technol, Sch Comp Sci & Engn, Guangzhou 510006, Guangdong, Peoples R China
Keywords
Generative adversarial networks; Semi-supervised learning; Image synthesis; Feature transformation
DOI
10.1016/j.patcog.2023.110022
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Generative Adversarial Networks (GANs) have become the mainstream models for class-conditional synthesis of high-fidelity images. To reduce the demand for labeled data, we propose a class-conditional GAN with Semantics-guided Adaptive Feature Transforms, referred to as SAFT-GAN, for semi-supervised image synthesis. Rather than simply incorporating a classifier to infer the class labels of unlabeled data, the key idea behind SAFT-GAN is to incorporate class-semantic guidance into real-fake discrimination. More specifically, we adopt a two-head discriminator architecture. A label-embedded head distinguishes real from fake instances conditioned on the class label; to focus on class-related regions, we exploit class-aware attention information to regularize this head via regional feature transforms. A label-free head, in turn, makes better use of unlabeled data: channel-adaptive feature transforms are imposed on it to fuse discriminator and classifier features, so that the class semantics of synthesized images can be improved. Extensive experiments demonstrate how class-conditional image synthesis benefits from the proposed feature transforms and confirm the superiority of SAFT-GAN.
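The two-head design summarized in the abstract can be sketched at toy scale as below. This is a minimal illustrative sketch, not the paper's implementation: the feature dimensions, the projection-style conditional score, and the predictor that derives the channel-wise scale/shift (gamma, beta) from classifier logits are all assumptions introduced here for clarity.

```python
import math
import random

random.seed(0)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

DIM, NUM_CLASSES = 8, 3

# Hypothetical toy parameters: the shared discriminator backbone is
# abstracted away, and an image is represented by a feature vector.
class_embeddings = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(NUM_CLASSES)]
w_real = [random.gauss(0, 1) for _ in range(DIM)]                                # scoring weights
w_cls = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(NUM_CLASSES)]  # classifier weights

def label_embedded_head(feat, label):
    """Conditional real/fake score: an unconditional term plus an inner
    product with the class embedding (projection-discriminator style)."""
    return dot(w_real, feat) + dot(class_embeddings[label], feat)

def classifier_features(feat):
    """Per-class logits, used here as auxiliary semantic features."""
    return [dot(w, feat) for w in w_cls]

def channel_adaptive_transform(feat, gamma, beta):
    """Channel-wise scale-and-shift: the basic form of an adaptive
    feature transform (FiLM-style modulation)."""
    return [g * f + b for f, g, b in zip(feat, gamma, beta)]

def label_free_head(feat):
    """Label-free real/fake score: the discriminator feature is modulated
    by parameters derived from classifier features, then scored."""
    logits = classifier_features(feat)
    # Hypothetical predictor mapping classifier logits to (gamma, beta).
    gamma = [1.0 + math.tanh(logits[i % NUM_CLASSES]) for i in range(DIM)]
    beta = [0.1 * logits[i % NUM_CLASSES] for i in range(DIM)]
    fused = channel_adaptive_transform(feat, gamma, beta)
    return dot(w_real, fused)

feat = [random.gauss(0, 1) for _ in range(DIM)]
print(label_embedded_head(feat, label=1))
print(label_free_head(feat))
```

The sketch shows only the information flow: the label-embedded head consumes the class label directly, while the label-free head receives class semantics indirectly through the classifier-conditioned modulation, which is what lets unlabeled images contribute a training signal.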
Pages: 10