HGFormer: Hierarchical Grouping Transformer for Domain Generalized Semantic Segmentation

Cited by: 24
Authors
Ding, Jian [1 ,2 ,3 ]
Xue, Nan [1 ]
Xia, Gui-Song [1 ,2 ]
Schiele, Bernt [3 ]
Dai, Dengxin [3 ]
Affiliations
[1] Wuhan Univ, Sch Comp Sci, NERCMS, Wuhan, Peoples R China
[2] Wuhan Univ, State Key Lab LIESMARS, Wuhan, Peoples R China
[3] Max Planck Inst Informat, Saarland Informat Campus, Saarbrucken, Germany
Source
2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | 2023
DOI
10.1109/CVPR52729.2023.01479
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Current semantic segmentation models have achieved great success under the independent and identically distributed (i.i.d.) condition. However, in real-world applications, test data may come from a domain different from the training data, so it is important to improve model robustness against domain differences. This work studies semantic segmentation under the domain generalization setting, where a model is trained only on the source domain and tested on unseen target domains. Existing works show that Vision Transformers are more robust than CNNs, and attribute this to the visual grouping property of self-attention. In this work, we propose a novel hierarchical grouping transformer (HGFormer) that explicitly groups pixels to form part-level masks and then whole-level masks; the masks at the two scales aim to segment out both parts and wholes of classes. HGFormer combines the mask classification results at both scales for class label prediction. We assemble multiple interesting cross-domain settings using seven public semantic segmentation datasets. Experiments show that HGFormer yields more robust semantic segmentation results than per-pixel classification methods and flat-grouping transformers, and significantly outperforms previous methods. Code will be available at https://github.com/dingjiansw101/HGFormer.
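The combination step the abstract describes, turning mask classification results at two scales into per-pixel labels, can be sketched as follows. This is a minimal NumPy illustration under the usual MaskFormer-style formulation (per-mask class logits plus per-mask spatial logits), not the authors' implementation; the function names and the simple averaging of the two scales are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mask_classification(mask_logits, class_logits):
    """Turn N mask proposals into per-pixel class scores.

    mask_logits:  (N, H, W) spatial logits, one map per mask.
    class_logits: (N, C)    class logits, one row per mask.
    Returns a (C, H, W) array of per-pixel class scores.
    """
    masks = sigmoid(mask_logits)              # soft mask memberships
    classes = softmax(class_logits, axis=-1)  # per-mask class probabilities
    # Each pixel accumulates the class distribution of every mask,
    # weighted by how strongly that mask covers the pixel.
    return np.einsum("nc,nhw->chw", classes, masks)

def combine_scales(part_scores, whole_scores):
    """Fuse part-level and whole-level scores (here: a plain average),
    then take the argmax class per pixel."""
    return np.argmax(0.5 * (part_scores + whole_scores), axis=0)
```

Part-level grouping typically yields more masks than whole-level grouping; both reduce to the same (C, H, W) score tensor, which is why a simple per-pixel average suffices for fusion in this sketch.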
Pages: 15413-15423
Page count: 11