Dual Modality Prompt Tuning for Vision-Language Pre-Trained Model

Cited by: 17
Authors
Xing, Yinghui [1 ,2 ]
Wu, Qirui [1 ]
Cheng, De [3 ]
Zhang, Shizhou [1 ]
Liang, Guoqiang [1 ]
Wang, Peng [1 ]
Zhang, Yanning [1 ]
Affiliations
[1] Northwestern Polytech Univ, Sch Comp Sci, Xian 710072, Peoples R China
[2] Northwestern Polytech Univ Shenzhen, Res Dev Inst, Shenzhen 518057, Peoples R China
[3] Xidian Univ, Sch Telecommun Engn, Xian 710071, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Visualization; Tuning; Task analysis; Adaptation models; Computational modeling; Feature extraction; Training; Few-shot learning; transfer learning; image classification; prompt tuning; vision-language model;
DOI
10.1109/TMM.2023.3291588
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
With the emergence of large pretrained vision-language models such as CLIP, transferable representations can be adapted to a wide range of downstream tasks via prompt tuning. Prompt tuning probes for beneficial information for downstream tasks from the general knowledge stored in the pretrained model. A recently proposed method named Context Optimization (CoOp) introduces a set of learnable vectors as text prompts from the language side. However, tuning the text prompt alone can only adjust the synthesized "classifier", while the computed visual features of the image encoder cannot be affected, thus leading to suboptimal solutions. In this article, we propose a novel dual-modality prompt tuning (DPT) paradigm through learning text and visual prompts simultaneously. To make the final image feature concentrate more on the target visual concept, a class-aware visual prompt tuning (CAVPT) scheme is further proposed in our DPT. In this scheme, the class-aware visual prompt is generated dynamically by performing cross attention between text prompt features and image patch token embeddings, encoding both the downstream task-related information and the visual instance information. Extensive experimental results on 11 datasets demonstrate the effectiveness and generalization ability of the proposed method.
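For illustration, the following is a minimal PyTorch sketch of the cross-attention step the abstract describes: text prompt features serve as queries over the image patch token embeddings to generate class-aware visual prompts. All names, dimensions, and the projection layer here are assumptions made for the sketch, not the authors' released implementation.

import torch
import torch.nn as nn

class ClassAwareVisualPrompt(nn.Module):
    """Hypothetical sketch of CAVPT-style prompt generation (assumed design)."""

    def __init__(self, text_dim=512, vision_dim=768, num_heads=8):
        super().__init__()
        # Project text prompt features into the vision embedding space
        # so they can attend over the patch tokens.
        self.text_proj = nn.Linear(text_dim, vision_dim)
        self.cross_attn = nn.MultiheadAttention(vision_dim, num_heads, batch_first=True)

    def forward(self, text_prompt_feats, patch_tokens):
        # text_prompt_feats: (B, num_classes, text_dim), one query per class prompt
        # patch_tokens:      (B, num_patches, vision_dim), image patch embeddings
        queries = self.text_proj(text_prompt_feats)
        # Cross attention: each class prompt aggregates task-related and
        # instance-specific information from the image patches.
        prompts, _ = self.cross_attn(queries, patch_tokens, patch_tokens)
        return prompts

# Toy usage: batch of 2 images, 10 classes, 196 patches (ViT-B/16-like shapes).
cavpt = ClassAwareVisualPrompt()
prompts = cavpt(torch.randn(2, 10, 512), torch.randn(2, 196, 768))
print(prompts.shape)  # torch.Size([2, 10, 768])

In the scheme the abstract describes, such prompts would be appended to the image encoder's input token sequence; the toy shapes above are chosen only to make the sketch runnable.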
Pages: 2056-2068
Number of pages: 13
Related Papers
66 items in total
  • [1] Bahng H, 2022, Arxiv, DOI arXiv:2203.17274
  • [2] Bossard L, 2014, LECT NOTES COMPUT SC, V8694, P446, DOI 10.1007/978-3-319-10599-4_29
  • [3] Cai H, 2020, ADV NEUR IN, V33
  • [4] Graph Neural Networks With Triple Attention for Few-Shot Learning
    Cheng, Hao
    Zhou, Joey Tianyi
    Tay, Wee Peng
    Wen, Bihan
    [J]. IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 8225 - 8239
  • [5] Describing Textures in the Wild
    Cimpoi, Mircea
    Maji, Subhransu
    Kokkinos, Iasonas
    Mohamed, Sammy
    Vedaldi, Andrea
    [J]. 2014 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2014, : 3606 - 3613
  • [6] Deng J, 2009, PROC CVPR IEEE, P248, DOI 10.1109/CVPR.2009.5206848
  • [7] VirTex: Learning Visual Representations from Textual Annotations
    Desai, Karan
    Johnson, Justin
    [J]. 2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 11157 - 11168
  • [8] Dosovitskiy A., 2021, PROC ICLR, P1
  • [9] Fang P., 2022, IEEE Trans. Multimedia, early access, DOI 10.1109/TMM.2022.3227416
  • [10] Gao P, 2021, Arxiv, DOI arXiv:2110.04544