HGFormer: Hierarchical Grouping Transformer for Domain Generalized Semantic Segmentation

Times Cited: 24
Authors
Ding, Jian [1,2,3]
Xue, Nan [1]
Xia, Gui-Song [1,2]
Schiele, Bernt [3]
Dai, Dengxin [3]
Affiliations
[1] Wuhan University, School of Computer Science, NERCMS, Wuhan, China
[2] Wuhan University, State Key Laboratory LIESMARS, Wuhan, China
[3] Max Planck Institute for Informatics, Saarland Informatics Campus, Saarbrücken, Germany
Source
2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) | 2023
Keywords
DOI
10.1109/CVPR52729.2023.01479
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Current semantic segmentation models have achieved great success under the independent and identically distributed (i.i.d.) condition. However, in real-world applications, test data might come from a different domain than training data. Therefore, it is important to improve model robustness against domain differences. This work studies semantic segmentation under the domain generalization setting, where a model is trained only on the source domain and tested on unseen target domains. Existing works show that Vision Transformers are more robust than CNNs and that this robustness is related to the visual grouping property of self-attention. In this work, we propose a novel hierarchical grouping transformer (HGFormer) that explicitly groups pixels to form part-level masks and then whole-level masks. The masks at different scales aim to segment out both parts and wholes of classes. HGFormer combines mask classification results at both scales for class label prediction. We assemble multiple interesting cross-domain settings by using seven public semantic segmentation datasets. Experiments show that HGFormer yields more robust semantic segmentation results than per-pixel classification methods and flat-grouping transformers, and outperforms previous methods significantly. Code will be available at https://github.com/dingjiansw101/HGFormer.
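The abstract states that HGFormer predicts part-level and whole-level masks and combines the mask classification results of both scales into the final per-pixel prediction. The sketch below illustrates one way such a two-scale fusion could be computed; the function names, the soft mask-assignment representation, and the equal fusion weights are assumptions made for illustration, not the paper's actual implementation.

```python
import torch

def masks_to_pixel_scores(mask_probs: torch.Tensor, class_logits: torch.Tensor) -> torch.Tensor:
    """Turn mask classification into per-pixel class scores.

    mask_probs:   (N, H, W) soft assignment of every pixel to N masks, values in [0, 1].
    class_logits: (N, C) class scores predicted for each mask.
    Returns (C, H, W): each pixel's class score is the assignment-weighted
    sum of its masks' class probabilities.
    """
    return torch.einsum("nhw,nc->chw", mask_probs, class_logits.softmax(dim=-1))

def fuse_part_and_whole(part_masks, part_cls, whole_masks, whole_cls,
                        w_part=0.5, w_whole=0.5):
    """Combine part-level and whole-level mask classification results into one
    per-pixel class score map (equal weighting is a hypothetical choice)."""
    part_scores = masks_to_pixel_scores(part_masks, part_cls)
    whole_scores = masks_to_pixel_scores(whole_masks, whole_cls)
    return w_part * part_scores + w_whole * whole_scores

# Toy usage: 8 part masks and 4 whole masks over a 64x64 image, 19 classes (Cityscapes-style).
part_masks = torch.rand(8, 64, 64)
whole_masks = torch.rand(4, 64, 64)
part_cls = torch.randn(8, 19)
whole_cls = torch.randn(4, 19)
pixel_scores = fuse_part_and_whole(part_masks, part_cls, whole_masks, whole_cls)
pred = pixel_scores.argmax(dim=0)  # (64, 64) per-pixel class labels
```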
Pages: 15413-15423
Number of pages: 11