MaPLe: Multi-modal Prompt Learning

Cited by: 314
Authors
Khattak, Muhammad Uzair [1 ]
Rasheed, Hanoona [1 ]
Maaz, Muhammad [1 ]
Khan, Salman [1 ,2 ]
Khan, Fahad Shahbaz [1 ,3 ]
Affiliations
[1] Mohamed bin Zayed Univ AI, Abu Dhabi, U Arab Emirates
[2] Australian Natl Univ, Canberra, ACT, Australia
[3] Linkoping Univ, Linkoping, Sweden
Source
2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | 2023
DOI
10.1109/CVPR52729.2023.01832
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Pre-trained vision-language (V-L) models such as CLIP have shown excellent generalization ability to downstream tasks. However, they are sensitive to the choice of input text prompts and require careful selection of prompt templates to perform well. Inspired by the Natural Language Processing (NLP) literature, recent CLIP adaptation approaches learn prompts as the textual inputs to fine-tune CLIP for downstream tasks. We note that using prompting to adapt representations in a single branch of CLIP (language or vision) is sub-optimal since it does not allow the flexibility to dynamically adjust both representation spaces on a downstream task. In this work, we propose Multi-modal Prompt Learning (MaPLe) for both vision and language branches to improve alignment between the vision and language representations. Our design promotes strong coupling between the vision-language prompts to ensure mutual synergy and discourages learning independent uni-modal solutions. Further, we learn separate prompts across different early stages to progressively model the stage-wise feature relationships to allow rich context learning. We evaluate the effectiveness of our approach on three representative tasks of generalization to novel classes, new target datasets and unseen domain shifts. Compared with the state-of-the-art method Co-CoOp, MaPLe exhibits favorable performance and achieves an absolute gain of 3.45% on novel classes and 2.72% on overall harmonic-mean, averaged over 11 diverse image recognition datasets. Our code and pre-trained models are available at https://github.com/muzairkhattak/multimodal-prompt-learning.
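The coupling idea in the abstract can be sketched in a few lines: learnable language-side prompts are kept per early stage (deep prompting), and the corresponding vision-side prompts are derived from them through a per-stage projection rather than learned independently. The dimensions, the use of a plain linear map, and all variable names below are illustrative assumptions for this sketch, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration only.
n_prompts, d_text, d_vision = 2, 512, 768
depth = 3  # number of early transformer stages that get their own prompts

# Learnable language prompts, one set per stage (stage-wise deep prompting).
text_prompts = [rng.standard_normal((n_prompts, d_text)) for _ in range(depth)]

# Per-stage coupling functions: vision prompts are *derived* from the
# language prompts via a projection, so the two branches stay tied and
# cannot drift into independent uni-modal solutions.
couplers = [
    rng.standard_normal((d_text, d_vision)) / np.sqrt(d_text)
    for _ in range(depth)
]

# Vision-branch prompts are a function of the language-branch prompts.
vision_prompts = [P @ W for P, W in zip(text_prompts, couplers)]

for i, V in enumerate(vision_prompts):
    print(f"stage {i}: vision prompt shape {V.shape}")
```

In training, only the language prompts and the coupling projections would receive gradients, which is what makes the vision prompts "coupled" rather than a second free set of parameters.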
Pages: 19113-19122
Page count: 10