Panoptic-PartFormer: Learning a Unified Model for Panoptic Part Segmentation

Cited: 20
Authors
Li, Xiangtai [1 ]
Xu, Shilin [1 ]
Yang, Yibo [1 ]
Cheng, Guangliang [2 ]
Tong, Yunhai [1 ]
Tao, Dacheng [3 ]
Affiliations
[1] Peking Univ, Key Lab Machine Percept, MOE, Sch Artificial Intelligence, Beijing, Peoples R China
[2] SenseTime Res, Hong Kong, Peoples R China
[3] JD Explore Acad, Beijing, Peoples R China
Source
COMPUTER VISION - ECCV 2022, PT XXVII | 2022, Vol. 13687
Keywords
Panoptic Part Segmentation; Scene understanding; Vision Transformer
DOI
10.1007/978-3-031-19812-0_42
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Panoptic Part Segmentation (PPS) aims to unify panoptic segmentation and part segmentation into one task. Previous work mainly uses separate approaches to handle thing, stuff, and part predictions individually, without any shared computation or task association. In this work, we aim to unify these tasks at the architectural level, designing the first end-to-end unified method, named Panoptic-PartFormer. In particular, motivated by recent progress in Vision Transformers, we model things, stuff, and parts as object queries and directly learn to optimize all three predictions as a unified mask prediction and classification problem. We design a decoupled decoder to generate part features and thing/stuff features respectively. We then use all the queries and their corresponding features to perform reasoning jointly and iteratively. The final masks are obtained via an inner product between the queries and the corresponding features. Extensive ablation studies and analysis prove the effectiveness of our framework. Panoptic-PartFormer achieves new state-of-the-art results on both the Cityscapes PPS and Pascal Context PPS datasets, with around 70% fewer GFLOPs and 50% fewer parameters. Given its effectiveness and conceptual simplicity, we hope Panoptic-PartFormer can serve as a strong baseline and aid future research in PPS. Our code and models will be available at https://github.com/lxtGH/Panoptic-PartFormer.
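As an illustration of the query-to-mask step described in the abstract, the following minimal PyTorch sketch shows how per-query masks can be produced as an inner product between query embeddings and a dense feature map. All tensor shapes, variable names, and the sigmoid/linear heads here are assumptions made for illustration, not the authors' released implementation.

# Illustrative sketch of query-based mask prediction (not the authors' code).
# Each thing/stuff/part query is a learned vector; its mask is the inner
# product of that vector with every spatial location of a decoded feature map.
import torch

num_queries, channels = 100, 256                 # assumed query count / width
height, width = 64, 128                          # assumed decoder feature size

queries = torch.randn(num_queries, channels)     # (N, C) query embeddings
features = torch.randn(channels, height, width)  # (C, H, W) decoder features

# Inner product between every query and every spatial location:
# (N, C) x (C, H*W) -> (N, H, W) mask logits.
mask_logits = torch.einsum("nc,chw->nhw", queries, features)
masks = mask_logits.sigmoid()                    # per-query soft masks

# Classification of each query via an (assumed) linear head.
num_classes = 19                                 # e.g. Cityscapes classes
cls_logits = queries @ torch.randn(channels, num_classes)

print(masks.shape, cls_logits.shape)             # (100, 64, 128), (100, 19)

In this formulation, thing, stuff, and part predictions differ only in which queries and which feature map (part vs. thing/stuff, from the decoupled decoder) are paired, which is what allows a single mask-and-classify head to serve all three tasks.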
Pages: 729-747
Number of pages: 19