Self-Ensembling GAN for Cross-Domain Semantic Segmentation

Times Cited: 7
Authors
Xu, Yonghao [1 ,2 ]
He, Fengxiang [3 ]
Du, Bo [4 ,5 ]
Tao, Dacheng [3 ]
Zhang, Liangpei [1 ]
Affiliations
[1] Wuhan Univ, State Key Lab Informat Engn Surveying Mapping & R, Wuhan 430079, Peoples R China
[2] Inst Adv Res Artificial Intelligence IARAI, A-1030 Vienna, Austria
[3] JDcom Inc, JD Explore Acad, Beijing, Peoples R China
[4] Wuhan Univ, Sch Comp Sci, Natl Engn Res Ctr Multimedia Software, Inst Artificial Intelligence, Wuhan 430079, Peoples R China
[5] Wuhan Univ, Hubei Key Lab Multimedia & Network Commun Engn, Wuhan 430079, Peoples R China
Keywords
Deep learning; domain adaptation; semantic segmentation; adversarial learning; deep neural network
DOI
10.1109/TMM.2022.3229976
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Deep neural networks (DNNs) have greatly contributed to the performance gains in semantic segmentation. Nevertheless, training DNNs generally requires large amounts of pixel-level labeled data, which is expensive and time-consuming to collect in practice. To mitigate the annotation burden, this paper proposes a self-ensembling generative adversarial network (SE-GAN) exploiting cross-domain data for semantic segmentation. In SE-GAN, a teacher network and a student network constitute a self-ensembling model for generating semantic segmentation maps, which, together with a discriminator, forms a GAN. Despite its simplicity, we find that SE-GAN can significantly boost the performance of adversarial training and enhance the stability of the model, the latter of which is a common barrier shared by most adversarial training-based methods. We theoretically analyze SE-GAN and provide an O(1/√N) generalization bound (N is the training sample size), which suggests controlling the discriminator's hypothesis complexity to enhance the generalizability. Accordingly, we choose a simple network as the discriminator. Extensive and systematic experiments in two standard settings demonstrate that the proposed method significantly outperforms current state-of-the-art approaches.
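The abstract describes a teacher-student self-ensembling generator trained adversarially against a deliberately simple discriminator. Below is a minimal, hedged sketch of that idea in PyTorch; the network shapes, loss weights, and EMA decay are illustrative assumptions, not the authors' actual architecture or hyperparameters.

```python
# Sketch only: a toy student/teacher segmentation pair with EMA self-ensembling
# and a small discriminator, assumed from the abstract's description of SE-GAN.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySegNet(nn.Module):
    """Toy segmentation backbone standing in for the real generator."""
    def __init__(self, num_classes=19):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, num_classes, 1),
        )
    def forward(self, x):
        return self.body(x)  # per-pixel class logits

class SimpleDiscriminator(nn.Module):
    """Deliberately small discriminator, echoing the paper's advice to keep
    the discriminator's hypothesis complexity low."""
    def __init__(self, num_classes=19):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(num_classes, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )
    def forward(self, seg_logits):
        return self.body(F.softmax(seg_logits, dim=1))

def ema_update(teacher, student, decay=0.99):
    """Self-ensembling: teacher weights are an exponential moving average of
    the student weights (the decay value here is an assumption)."""
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(decay).add_(s, alpha=1.0 - decay)

student = TinySegNet()
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)
disc = SimpleDiscriminator()
opt_g = torch.optim.SGD(student.parameters(), lr=1e-3)
opt_d = torch.optim.SGD(disc.parameters(), lr=1e-3)

# Dummy labeled source batch and unlabeled target batch.
src_img = torch.randn(2, 3, 64, 64)
src_lbl = torch.randint(0, 19, (2, 64, 64))
tgt_img = torch.randn(2, 3, 64, 64)

# Generator step: supervised loss on source labels, consistency with the
# teacher on target images, and an adversarial term on target predictions.
src_logits = student(src_img)
tgt_logits = student(tgt_img)
with torch.no_grad():
    tgt_teacher = teacher(tgt_img)
loss_seg = F.cross_entropy(src_logits, src_lbl)
loss_cons = F.mse_loss(F.softmax(tgt_logits, 1), F.softmax(tgt_teacher, 1))
pred_fake = disc(tgt_logits)
loss_adv = F.binary_cross_entropy_with_logits(pred_fake, torch.ones_like(pred_fake))
opt_g.zero_grad()
(loss_seg + 0.1 * loss_cons + 0.01 * loss_adv).backward()
opt_g.step()
ema_update(teacher, student)

# Discriminator step: separate teacher outputs ("real") from the student's
# target-domain predictions ("fake").
real = disc(teacher(src_img).detach())
fake = disc(student(tgt_img).detach())
loss_d = (F.binary_cross_entropy_with_logits(real, torch.ones_like(real)) +
          F.binary_cross_entropy_with_logits(fake, torch.zeros_like(fake)))
opt_d.zero_grad()
loss_d.backward()
opt_d.step()
```

The loss weights (0.1, 0.01) and the choice of which signals count as "real" versus "fake" are placeholders; the paper itself should be consulted for the exact formulation.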
Pages: 7837-7850
Number of pages: 14