Open-Vocabulary Semantic Segmentation with Mask-adapted CLIP

Cited by: 121
Authors
Liang, Feng [1 ]
Wu, Bichen [2 ]
Dai, Xiaoliang [2 ]
Li, Kunpeng [2 ]
Zhao, Yinan [2 ]
Zhang, Hang [3 ]
Zhang, Peizhao [2 ]
Vajda, Peter [2 ]
Marculescu, Diana [1 ]
Affiliations
[1] Univ Texas Austin, Austin, TX 78712 USA
[2] Meta Reality Labs, Burlingame, CA USA
[3] Cruise, Hong Kong, People's Republic of China
Source
2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR | 2023
DOI
10.1109/CVPR52729.2023.00682
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Open-vocabulary semantic segmentation aims to segment an image into semantic regions according to text descriptions, which may not have been seen during training. Recent two-stage methods first generate class-agnostic mask proposals and then leverage pre-trained vision-language models, e.g., CLIP, to classify masked regions. We identify the performance bottleneck of this paradigm to be the pre-trained CLIP model, since it does not perform well on masked images. To address this, we propose to finetune CLIP on a collection of masked image regions and their corresponding text descriptions. We collect training data by mining an existing image-caption dataset (e.g., COCO Captions), using CLIP to match masked image regions to nouns in the image captions. Compared with the more precise and manually annotated segmentation labels with fixed classes (e.g., COCO-Stuff), we find our noisy but diverse dataset can better retain CLIP's generalization ability. Along with finetuning the entire model, we utilize the "blank" areas in masked images using a method we dub mask prompt tuning. Experiments demonstrate mask prompt tuning brings significant improvement without modifying any weights of CLIP, and it can further improve a fully finetuned model. In particular, when trained on COCO and evaluated on ADE20K-150, our best model achieves 29.6% mIoU, which is +8.5% higher than the previous state-of-the-art. For the first time, open-vocabulary generalist models match the performance of supervised specialist models in 2017 without dataset-specific adaptations.
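The mask prompt tuning idea described above can be sketched as follows: patches of a masked image that fall entirely inside the blanked-out region carry no information, so their embeddings are replaced with learnable prompt tokens while all CLIP weights stay frozen. This is a minimal illustrative sketch; the function name, shapes, and per-patch replacement scheme are assumptions, not the authors' actual implementation.

```python
import numpy as np

def apply_mask_prompt(patch_tokens: np.ndarray,
                      blank_mask: np.ndarray,
                      mask_prompt: np.ndarray) -> np.ndarray:
    """Swap the embeddings of fully masked ("blank") patches for learnable
    prompt tokens before the ViT encoder, leaving CLIP weights untouched.

    patch_tokens: (N, D) patch embeddings of a masked image
    blank_mask:   (N,) bool, True where a patch is entirely masked out
    mask_prompt:  (N, D) learnable prompt tokens (the only trained parameters)
    Returns:      (N, D) tokens with blank patches replaced by prompts
    """
    # Broadcast the boolean mask over the embedding dimension and select
    # the prompt token wherever the patch is blank, the original otherwise.
    return np.where(blank_mask[:, None], mask_prompt, patch_tokens)
```

Because only `mask_prompt` receives gradients, this tuning can be applied on top of a frozen CLIP, and (as the abstract notes) also stacked on a fully finetuned model.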
Pages: 7061-7070
Page count: 10