Contextual-Relation Consistent Domain Adaptation for Semantic Segmentation

Cited by: 86
Authors
Huang, Jiaxing [1 ]
Lu, Shijian [1 ]
Guan, Dayan [1 ]
Zhang, Xiaobing [2 ]
Affiliations
[1] Nanyang Technol Univ, 50 Nanyang Ave, Singapore 639798, Singapore
[2] Univ Elect Sci & Technol China, Chengdu, Peoples R China
Source
COMPUTER VISION - ECCV 2020, PT XV | 2020 / Vol. 12360
Keywords
Semantic segmentation; Unsupervised domain adaptation; Contextual-relation consistent;
DOI
10.1007/978-3-030-58555-6_42
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recent advances in unsupervised domain adaptation for semantic segmentation have shown great potential to relieve the demand for expensive per-pixel annotations. However, most existing works address the domain discrepancy by aligning the data distributions of the two domains at a global image level, whereas local consistencies are largely neglected. This paper presents an innovative local contextual-relation consistent domain adaptation (CrCDA) technique that aims to achieve local-level consistency during global-level alignment. The idea is to take a closer look at region-wise feature representations and align them for local-level consistency. Specifically, CrCDA learns and enforces prototypical local contextual-relations explicitly in the feature space of a labelled source domain while transferring them to an unlabelled target domain via backpropagation-based adversarial learning. An adaptive entropy max-min adversarial learning scheme is designed to optimally align these hundreds of local contextual-relations across domains without requiring a discriminator or extra computation overhead. The proposed CrCDA has been evaluated extensively on two challenging domain adaptive segmentation tasks (i.e., GTA5 -> Cityscapes and SYNTHIA -> Cityscapes), and experiments demonstrate its superior segmentation performance compared with state-of-the-art methods.
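As a rough illustration of the quantity that entropy-based min-max adversarial schemes like the one in the abstract operate on, the sketch below computes a normalized per-pixel entropy map from softmax segmentation outputs. This is a minimal NumPy sketch, not the paper's actual training code; the function name, tensor shapes, and toy inputs are assumptions for illustration.

```python
import numpy as np

def pixel_entropy(probs, eps=1e-12):
    """Normalized per-pixel Shannon entropy of softmax outputs.

    probs: array of shape (C, H, W) holding class probabilities per pixel.
    Returns an (H, W) map in [0, 1]; values near 1 mark uncertain pixels,
    which entropy min-max schemes push the two players to shrink or grow.
    """
    c = probs.shape[0]
    ent = -np.sum(probs * np.log(probs + eps), axis=0)  # (H, W) entropy map
    return ent / np.log(c)  # divide by log(C), the maximum possible entropy

# Toy example: 3 classes over a 2x2 image.
confident = np.zeros((3, 2, 2))
confident[0] = 1.0                     # one-hot predictions
uniform = np.full((3, 2, 2), 1.0 / 3)  # maximally uncertain predictions

print(pixel_entropy(confident).mean())  # ~ 0.0
print(pixel_entropy(uniform).mean())    # ~ 1.0
```

In a min-max scheme of this kind, one player (e.g., the classifier head) is trained to raise this entropy on target-domain pixels while the feature extractor is trained to lower it, so that target features are pulled toward confident, source-like regions without a separate discriminator network.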
Pages: 705-722
Page count: 18