Soft Curriculum for Learning Conditional GANs with Noisy-Labeled and Uncurated Unlabeled Data

Cited by: 0
Authors
Katsumata, Kai [1 ]
Duc Minh Vo [1 ]
Harada, Tatsuya [1 ,2 ]
Nakayama, Hideki [1 ]
Affiliations
[1] Univ Tokyo, Tokyo, Japan
[2] RIKEN, Wako, Saitama, Japan
Source
2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV 2024) | 2024
DOI
10.1109/WACV57701.2024.00524
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Noisy-labeled data or curated unlabeled data are used to compensate for the assumption of clean labeled data in training conditional generative adversarial networks; however, satisfying even this extended assumption is often laborious or impractical. As a step towards generative modeling accessible to everyone, we introduce a novel conditional image generation framework that accepts noisy-labeled and uncurated unlabeled data during training: (i) closed-set and open-set label noise in labeled data and (ii) closed-set and open-set unlabeled data. To handle such data, we propose soft curriculum learning, which assigns instance-wise weights for adversarial training while assigning new labels to unlabeled data and correcting wrong labels in labeled data. Unlike popular curriculum learning, which uses a threshold to select training samples, our soft curriculum controls the effect of each training instance through weights predicted by the auxiliary classifier, preserving useful samples while suppressing harmful ones. Our experiments show that our approach outperforms existing semi-supervised and label-noise robust methods in both quantitative and qualitative performance. In particular, the proposed approach matches the performance of (semi-)supervised GANs even with less than half the labeled data.
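The contrast drawn in the abstract (a hard selection threshold versus instance-wise weights from the auxiliary classifier) can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function names are invented, and using the raw classifier confidence as the weight is an assumption made for clarity.

```python
import numpy as np

def hard_curriculum_weights(conf, threshold=0.5):
    """Hard curriculum: each sample is either kept (weight 1.0)
    or discarded (weight 0.0) based on a confidence threshold."""
    return (conf >= threshold).astype(float)

def soft_curriculum_weights(conf):
    """Soft curriculum (sketch): each sample contributes in proportion
    to the auxiliary classifier's confidence, so likely-harmful samples
    are down-weighted rather than discarded outright."""
    return conf

def weighted_loss(per_sample_losses, weights):
    """Instance-weighted training objective: mean of weighted losses."""
    return float(np.mean(weights * per_sample_losses))

# Hypothetical confidences and per-sample adversarial losses for five samples.
confidences = np.array([0.95, 0.80, 0.55, 0.30, 0.05])
losses = np.array([0.4, 0.6, 0.5, 0.9, 1.2])

hard = weighted_loss(losses, hard_curriculum_weights(confidences))
soft = weighted_loss(losses, soft_curriculum_weights(confidences))
```

With the hard scheme the two low-confidence samples contribute nothing to training; with the soft scheme they still contribute, but only weakly, which is the mechanism the abstract credits with preserving useful samples while suppressing harmful ones.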
Pages: 5311-5320
Page count: 10