SeMask: Semantically Masked Transformers for Semantic Segmentation

Cited by: 37
Authors
Jain, Jitesh [1 ,2 ,3 ,4 ]
Singh, Anukriti [1 ,2 ,3 ,4 ]
Orlov, Nikita [5 ]
Huang, Zilong [1 ,2 ,3 ,4 ]
Li, Jiachen [1 ,2 ,3 ,4 ]
Walton, Steven [1 ,2 ,3 ,4 ]
Shi, Humphrey [1 ,2 ,3 ,4 ,5 ]
Affiliations
[1] SHI Labs, Somerset, NJ USA
[2] Georgia Tech, Atlanta, GA USA
[3] UIUC, Champaign, IL USA
[4] Oregon, Corvallis, OR USA
[5] Picsart AI Res PAIR, Miami, FL USA
Source
2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW) | 2023
DOI
10.1109/ICCVW60793.2023.00083
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Finetuning a pretrained backbone in the encoder part of an image transformer network has been the traditional approach to the semantic segmentation task. However, such an approach leaves out the semantic context that an image provides during the encoding stage. This paper argues that incorporating semantic information about the image into pretrained hierarchical transformer-based backbones during finetuning considerably improves the performance. To achieve this, we propose SeMask, a simple and effective framework that incorporates semantic information into the encoder with the help of a semantic attention operation. In addition, we use a lightweight semantic decoder during training to provide supervision to the intermediate semantic prior maps at every stage. Our experiments demonstrate that incorporating semantic priors enhances the performance of the established hierarchical encoders with only a slight increase in the number of FLOPs. We provide empirical evidence by integrating SeMask into Swin Transformer and Mix Transformer backbones as our encoders, paired with different decoders. Our framework achieves an impressive 58.25% mIoU on the ADE20K dataset with the SeMask Swin-L backbone and improvements of over 3% mIoU on the Cityscapes dataset. The code is publicly available at https://github.com/Picsart-AI-Research/SeMask-Segmentation.
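As a rough illustration of the idea described in the abstract, the PyTorch sketch below shows one possible "semantic layer" inserted after an encoder stage: it predicts a per-pixel class prior map, uses it as a semantic attention signal to re-weight the stage features, and returns the prior map so an auxiliary decoder can supervise it during training. All module, method, and parameter names (SemanticAttentionLayer, to_classes, from_classes) are hypothetical, not the official SeMask implementation; see the linked repository for the authors' actual layers.

```python
# Hypothetical sketch of a semantic-attention layer (illustrative only, not the
# official SeMask code). It adds class-aware context to hierarchical encoder
# features and exposes the semantic prior map for auxiliary supervision.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SemanticAttentionLayer(nn.Module):
    def __init__(self, channels: int, num_classes: int):
        super().__init__()
        self.to_classes = nn.Conv2d(channels, num_classes, kernel_size=1)    # predicts semantic prior map
        self.from_classes = nn.Conv2d(num_classes, channels, kernel_size=1)  # projects class context back
        self.norm = nn.BatchNorm2d(channels)

    def forward(self, x: torch.Tensor):
        # x: (B, C, H, W) features from one encoder stage
        prior = self.to_classes(x)            # (B, K, H, W) per-pixel class logits
        attn = prior.softmax(dim=1)           # semantic attention over the K classes
        context = self.from_classes(attn)     # (B, C, H, W) class-conditioned context
        out = self.norm(x + context)          # fuse context with the original features
        return out, prior                     # prior goes to the lightweight semantic decoder


if __name__ == "__main__":
    feats = torch.randn(2, 96, 32, 32)        # e.g. stage-1 features of a Swin-T-like encoder
    layer = SemanticAttentionLayer(channels=96, num_classes=150)  # 150 classes as in ADE20K
    enriched, prior = layer(feats)
    # During training, the upsampled prior map would receive an auxiliary
    # cross-entropy loss against the ground-truth segmentation.
    aux_loss = F.cross_entropy(
        F.interpolate(prior, size=(128, 128), mode="bilinear", align_corners=False),
        torch.randint(0, 150, (2, 128, 128)),
    )
    print(enriched.shape, prior.shape, aux_loss.item())
```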
Pages: 752-761
Page count: 10