RegionCLIP: Region-based Language-Image Pretraining

Citations: 297
Authors
Zhong, Yiwu [1 ,2 ]
Yang, Jianwei [2 ]
Zhang, Pengchuan [2 ]
Li, Chunyuan [2 ]
Codella, Noel [3 ]
Li, Liunian Harold [4 ]
Zhou, Luowei [3 ]
Dai, Xiyang [3 ]
Yuan, Lu [3 ]
Li, Yin [1 ]
Gao, Jianfeng [2 ]
Affiliations
[1] Univ Wisconsin, Madison, WI 53706 USA
[2] Microsoft Res, Redmond, WA USA
[3] Microsoft Cloud AI, Redmond, WA USA
[4] Univ Calif Los Angeles, Los Angeles, CA 90024 USA
Source
2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022) | 2022
DOI
10.1109/CVPR52688.2022.01629
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Contrastive language-image pretraining (CLIP) using image-text pairs has achieved impressive results on image classification in both zero-shot and transfer learning settings. However, we show that directly applying such models to recognize image regions for object detection leads to unsatisfactory performance due to a major domain shift: CLIP was trained to match an image as a whole to a text description, without capturing the fine-grained alignment between image regions and text spans. To mitigate this issue, we propose a new method called RegionCLIP that significantly extends CLIP to learn region-level visual representations, thus enabling fine-grained alignment between image regions and textual concepts. Our method leverages a CLIP model to match image regions with template captions, and then pretrains our model to align these region-text pairs in the feature space. When transferring our pretrained model to the open-vocabulary object detection task, our method outperforms the state of the art by 3.8 AP50 and 2.2 AP for novel categories on COCO and LVIS datasets, respectively. Further, the learned region representations support zero-shot inference for object detection, showing promising results on both COCO and LVIS datasets. Our code is available at https://github.com/microsoft/RegionCLIP.
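The pretraining objective the abstract describes — aligning region features with template-caption embeddings in a shared feature space — can be sketched as a CLIP-style symmetric contrastive (InfoNCE) loss over matched region-text pairs. The following is a minimal illustration with random features; all names, shapes, and the temperature value are hypothetical, not the authors' implementation:

```python
import numpy as np

def contrastive_region_text_loss(region_feats, text_feats, temperature=0.07):
    """Symmetric InfoNCE over matched region-text pairs.

    region_feats, text_feats: (N, D) arrays; row i of each forms a matched pair.
    """
    # L2-normalize so dot products become cosine similarities
    r = region_feats / np.linalg.norm(region_feats, axis=1, keepdims=True)
    t = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    logits = r @ t.T / temperature          # (N, N) region-to-text similarities
    labels = np.arange(len(r))              # region i is paired with text i

    def xent(lg):
        # cross-entropy of each row against its diagonal (matched) entry
        lg = lg - lg.max(axis=1, keepdims=True)   # numerical stability
        log_prob = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_prob[labels, labels].mean()

    # average the region->text and text->region directions
    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(0)
regions = rng.normal(size=(8, 32))   # hypothetical region embeddings
texts = rng.normal(size=(8, 32))     # hypothetical template-caption embeddings
print(contrastive_region_text_loss(regions, texts))
```

With perfectly aligned pairs (identical region and text embeddings) the diagonal dominates each row and the loss approaches zero, which is the behavior the pretraining stage optimizes for.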
Pages: 16772-16782
Page count: 11