Category Anchor-Guided Unsupervised Domain Adaptation for Semantic Segmentation

Cited by: 0
Authors
Zhang, Qiming [1 ]
Zhang, Jing [1 ]
Liu, Wei [2 ]
Tao, Dacheng [1 ]
Affiliations
[1] Univ Sydney, Fac Engn, Sch Comp Sci, UBTECH Sydney AI Ctr, Darlington, NSW 2008, Australia
[2] Tencent AI Lab, Bellevue, WA USA
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019) | 2019 / Vol. 32
Funding
Australian Research Council; National Natural Science Foundation of China;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Unsupervised domain adaptation (UDA) aims to enhance the generalization capability of a certain model from a source domain to a target domain. UDA is of particular significance since no extra effort is devoted to annotating target domain samples. However, the different data distributions in the two domains, or domain shift/discrepancy, inevitably compromise the UDA performance. Although there has been progress in matching the marginal distributions between two domains, the classifier favors the source domain features and makes incorrect predictions on the target domain due to category-agnostic feature alignment. In this paper, we propose a novel category anchor-guided (CAG) UDA model for semantic segmentation, which explicitly enforces category-aware feature alignment to learn shared discriminative features and classifiers simultaneously. First, the category-wise centroids of the source domain features are used as guided anchors to identify the active features in the target domain and also assign them pseudo-labels. Then, we leverage an anchor-based pixel-level distance loss and a discriminative loss to drive the intra-category features closer and the inter-category features further apart, respectively. Finally, we devise a stagewise training mechanism to reduce the error accumulation and adapt the proposed model progressively. Experiments on both the GTA5→Cityscapes and SYNTHIA→Cityscapes scenarios demonstrate the superiority of our CAG-UDA model over the state-of-the-art methods. The code is available at https://github.com/RogerZhangzz/CAG_UDA.
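The anchor-and-pseudo-label step described in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (which operates on per-pixel CNN features; see the linked repository): the function names, the Euclidean distance metric, and the fixed `threshold` parameter are illustrative assumptions.

```python
import numpy as np

def category_anchors(features, labels, num_classes):
    """Per-category centroids ("anchors") of source-domain features.

    features: (N, D) array of feature vectors.
    labels:   (N,) array of integer category labels.
    """
    anchors = np.zeros((num_classes, features.shape[1]))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            anchors[c] = features[mask].mean(axis=0)
    return anchors

def assign_pseudo_labels(target_features, anchors, threshold):
    """Label each target feature with its nearest anchor's category if the
    distance is below `threshold` ("active" features); otherwise mark it -1
    so it is ignored by the adaptation losses.
    """
    # (M, C) pairwise Euclidean distances from target features to anchors
    dists = np.linalg.norm(
        target_features[:, None, :] - anchors[None, :, :], axis=2
    )
    nearest = dists.argmin(axis=1)
    return np.where(dists.min(axis=1) < threshold, nearest, -1)
```

The anchor-based distance loss mentioned in the abstract would then pull each pseudo-labeled target feature toward its assigned anchor, while unlabeled (-1) features contribute nothing, limiting the error accumulated from wrong pseudo-labels.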
Pages: 11