Enhancing Stability in Training Conditional Generative Adversarial Networks via Selective Data Matching

Cited by: 0
Authors
Kong, Kyeongbo [1 ]
Kim, Kyunghun [2 ]
Kang, Suk-Ju [3 ]
Affiliations
[1] Pusan Natl Univ, Dept Elect & Elect Engn, Pusan 46241, South Korea
[2] NHN Cloud, Seongnam Si 13487, South Korea
[3] Sogang Univ, Dept Elect Engn, Seoul 04017, South Korea
Funding
National Research Foundation of Singapore;
Keywords
Training; Generators; Generative adversarial networks; Optimization; Toy manufacturing industry; Vectors; Controllability; Condition monitoring; Conditional GAN; content optimization; distribution matching; diversity; prioritization;
DOI
10.1109/ACCESS.2024.3439561
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Conditional generative adversarial networks (cGANs) have demonstrated remarkable success due to their class-wise controllability and superior quality on complex generation tasks. Typical cGANs solve the joint distribution matching problem by decomposing it into two easier sub-problems: marginal matching and conditional matching. In this paper, we propose a simple but effective training methodology, selective focusing learning, which trains the discriminator and generator to learn easy samples of each class rapidly while maintaining diversity. Our key idea is to selectively apply conditional and joint matching to the data in each mini-batch. Specifically, we first select the samples with the highest scores when sorted by the conditional term of the discriminator outputs (for both real and generated samples). Then we optimize the model using only conditional matching for the selected samples and joint matching for the remaining samples. From our toy experiments, we found that it is best to apply only conditional matching to certain samples due to the content-aware optimization of the discriminator. We conducted experiments on ImageNet (64x64 and 128x128), CIFAR-10, CIFAR-100, Mixture-of-Gaussians, and noisy-label settings to demonstrate that the proposed method substantially improves all metrics (by up to 35.18% in terms of FID) over 10 independent trials. Code is available at https://github.com/pnu-cvsp/Enhancing-Stability-in-Training-Conditional-GAN-via-Selective-Data-Matching .
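The select-then-split step described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' released implementation: it partitions a mini-batch by the conditional term of the discriminator output, applying only conditional matching to the top-scoring ("easy") subset and joint (conditional plus marginal) matching to the rest. The function names, loss decomposition into per-sample terms, and the `select_ratio` hyperparameter are all assumptions for illustration.

```python
def selective_matching_split(cond_scores, select_ratio=0.5):
    """Partition a mini-batch by the conditional discriminator score.

    Returns (easy_idx, rest_idx): the indices of the top-scoring
    fraction of samples (conditional matching only) and of the
    remaining samples (joint matching).
    """
    k = int(len(cond_scores) * select_ratio)
    # Sort indices by conditional score, highest first.
    order = sorted(range(len(cond_scores)),
                   key=lambda i: cond_scores[i], reverse=True)
    return order[:k], order[k:]


def combined_loss(cond_loss, marg_loss, easy_idx, rest_idx):
    """Selective objective over per-sample loss terms:
    conditional term only on the easy subset, joint
    (conditional + marginal) term on the remainder."""
    total = sum(cond_loss[i] for i in easy_idx)
    total += sum(cond_loss[i] + marg_loss[i] for i in rest_idx)
    return total / (len(easy_idx) + len(rest_idx))
```

In a real training loop the per-sample conditional and marginal terms would come from the decomposed discriminator output; here they are passed in as plain lists so the split logic stands alone.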
Pages: 119647-119659
Page count: 13