Panoptic-PartFormer: Learning a Unified Model for Panoptic Part Segmentation

Cited by: 17
Authors
Li, Xiangtai [1 ]
Xu, Shilin [1 ]
Yang, Yibo [1 ]
Cheng, Guangliang [2 ]
Tong, Yunhai [1 ]
Tao, Dacheng [3 ]
Affiliations
[1] Peking Univ, Key Lab Machine Percept, MOE, Sch Artificial Intelligence, Beijing, Peoples R China
[2] SenseTime Res, Hong Kong, Peoples R China
[3] JD Explore Acad, Beijing, Peoples R China
Source
COMPUTER VISION - ECCV 2022, PT XXVII | 2022, Vol. 13687
Keywords
Panoptic Part Segmentation; Scene understanding; Vision Transformer;
DOI
10.1007/978-3-031-19812-0_42
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Panoptic Part Segmentation (PPS) aims to unify panoptic segmentation and part segmentation into one task. Previous work mainly uses separate approaches to handle thing, stuff, and part predictions individually, without any shared computation or task association. In this work, we aim to unify these tasks at the architectural level, designing the first end-to-end unified method, named Panoptic-PartFormer. In particular, motivated by recent progress in Vision Transformers, we model things, stuff, and parts as object queries and directly learn to optimize all three predictions as a unified mask prediction and classification problem. We design a decoupled decoder that generates part features and thing/stuff features separately. We then use all the queries and their corresponding features to perform reasoning jointly and iteratively. The final masks are obtained via an inner product between the queries and the corresponding features. Extensive ablation studies and analyses demonstrate the effectiveness of our framework. Panoptic-PartFormer achieves new state-of-the-art results on both the Cityscapes PPS and Pascal Context PPS datasets, with roughly 70% fewer GFLOPs and 50% fewer parameters. Given its effectiveness and conceptual simplicity, we hope Panoptic-PartFormer can serve as a strong baseline and aid future research in PPS. Our code and models will be available at https://github.com/lxtGH/Panoptic-PartFormer.
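The mask-prediction step described in the abstract (each thing/stuff/part query combined with the decoded features via an inner product) can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation; all tensor names and shapes (num_queries, embed_dim, the two feature maps from the decoupled decoder) are illustrative assumptions.

    import torch

    # Illustrative shapes; the real model's sizes may differ.
    batch, num_queries, embed_dim, height, width = 2, 100, 256, 64, 64

    queries = torch.randn(batch, num_queries, embed_dim)                 # thing/stuff/part queries
    part_features = torch.randn(batch, embed_dim, height, width)         # hypothetical output of the part branch
    thing_stuff_features = torch.randn(batch, embed_dim, height, width)  # hypothetical output of the thing/stuff branch

    def predict_masks(q, feats):
        # Inner product of each query with every pixel embedding gives one mask logit map per query.
        return torch.einsum("bqc,bchw->bqhw", q, feats)

    part_masks = predict_masks(queries, part_features)                   # (B, Q, H, W) mask logits
    thing_stuff_masks = predict_masks(queries, thing_stuff_features)     # (B, Q, H, W) mask logits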
Pages: 729-747
Number of Pages: 19