Prompting-to-Distill Semantic Knowledge for Few-Shot Learning

Times Cited: 0
Authors
Ji, Hong [1 ]
Gao, Zhi [1 ,2 ]
Ren, Jinchang [3 ]
Wang, Xing-ao [1 ]
Gao, Tianyi [1 ]
Sun, Wenbo [1 ]
Ma, Ping
Affiliations
[1] Wuhan Univ, Sch Remote Sensing Informat Engn, Wuhan 430079, Peoples R China
[2] Hubei Luojia Lab, Wuhan 430079, Peoples R China
[3] Robert Gordon Univ, Natl Subsea Ctr, Aberdeen AB21 0BH, Scotland
Funding
National Natural Science Foundation of China;
Keywords
Attention mechanism; ChatGPT; CLIP; few-shot learning (FSL); multimodal knowledge;
DOI
10.1109/LGRS.2024.3414505
CLC classification codes
P3 [Geophysics]; P59 [Geochemistry];
Subject classification codes
0708; 070902;
Abstract
Recognizing visual patterns in a low-data regime requires deep neural networks to glean generalized representations from limited training samples. In this letter, we propose a novel few-shot classification method, namely ProDFSL, which leverages multimodal knowledge and an attention mechanism. Inspired by recent advances in large language models and the great potential they have shown across a wide range of downstream tasks, we tailor this capability to benefit the remote sensing community. We utilize ChatGPT to produce class-specific textual inputs that endow CLIP with rich semantic information. To promote the adaptation of CLIP to the remote sensing domain, we introduce a cross-modal knowledge generation module, which dynamically generates a group of soft prompts conditioned on the few-shot visual samples and further uses a shallow Transformer to model the dependencies between language sequences. By fusing the semantic information with the few-shot visual samples, we build representative class prototypes, which are conducive to both inductive and transductive inference. In extensive experiments on standard benchmarks, our ProDFSL consistently outperforms the state of the art in few-shot learning (FSL).
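As a rough illustration of the prototype-fusion idea summarized in the abstract (not the authors' released code), the sketch below builds class prototypes by combining per-class mean visual features with semantic text features and classifies queries by cosine similarity. The fusion weight alpha, all tensor shapes, and the assumption that CLIP-style visual/text embeddings (e.g., from ChatGPT-generated class descriptions) are precomputed are hypothetical choices for this example.

```python
# Minimal sketch, assuming precomputed CLIP-style embeddings; the fusion
# weight `alpha` and the simple convex combination are illustrative only.
import torch
import torch.nn.functional as F

def build_prototypes(support_feats, support_labels, text_feats, num_classes, alpha=0.5):
    """Fuse per-class mean visual features with semantic text features.

    support_feats:  (N_support, D) visual embeddings of the few-shot samples
    support_labels: (N_support,)   integer class labels in [0, num_classes)
    text_feats:     (num_classes, D) semantic embeddings of class descriptions
    """
    protos = []
    for c in range(num_classes):
        visual_proto = support_feats[support_labels == c].mean(dim=0)
        fused = alpha * F.normalize(visual_proto, dim=-1) \
            + (1 - alpha) * F.normalize(text_feats[c], dim=-1)
        protos.append(fused)
    return F.normalize(torch.stack(protos), dim=-1)  # (num_classes, D)

def classify(query_feats, prototypes):
    """Assign each query to the prototype with the highest cosine similarity."""
    logits = F.normalize(query_feats, dim=-1) @ prototypes.t()
    return logits.argmax(dim=-1)

if __name__ == "__main__":
    # Toy 5-way 1-shot episode with random features standing in for encoder outputs.
    D, ways = 512, 5
    support = torch.randn(ways, D)
    labels = torch.arange(ways)
    text = torch.randn(ways, D)
    queries = torch.randn(15, D)
    protos = build_prototypes(support, labels, text, ways)
    print(classify(queries, protos))
```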
Pages: 5
Related papers
50 items in total
  • [41] HybridPrompt: Domain-Aware Prompting for Cross-Domain Few-Shot Learning
    Wu, Jiamin
    Zhang, Tianzhu
    Zhang, Yongdong
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2024, 132 (12) : 5681 - 5697
  • [42] Defensive Few-Shot Learning
    Li, Wenbin
    Wang, Lei
    Zhang, Xingxing
    Qi, Lei
    Huo, Jing
    Gao, Yang
    Luo, Jiebo
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (05) : 5649 - 5667
  • [43] Federated Few-shot Learning
    Wang, Song
    Fu, Xingbo
    Ding, Kaize
    Chen, Chen
    Chen, Huiyuan
    Li, Jundong
    PROCEEDINGS OF THE 29TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2023, 2023, : 2374 - 2385
  • [44] CLG: Contrastive Label Generation with Knowledge for Few-Shot Learning
    Ma, Han
    Fan, Baoyu
    Ng, Benjamin K.
    Lam, Chan-Tong
    MATHEMATICS, 2024, 12 (03)
  • [45] Learning self-target knowledge for few-shot segmentation
    Chen, Yadang
    Chen, Sihan
    Yang, Zhi-Xin
    Wu, Enhua
    PATTERN RECOGNITION, 2024, 149
  • [46] Knowledge transduction for cross-domain few-shot learning
    Li, Pengfang
    Liu, Fang
    Jiao, Licheng
    Li, Shuo
    Li, Lingling
    Liu, Xu
    Huang, Xinyan
    PATTERN RECOGNITION, 2023, 141
  • [47] Combat data shift in few-shot learning with knowledge graph
    Zhu, Yongchun
    Zhuang, Fuzhen
    Zhang, Xiangliang
    Qi, Zhiyuan
    Shi, Zhiping
    Cao, Juan
    He, Qing
    FRONTIERS OF COMPUTER SCIENCE, 2023, 17 (01)
  • [48] BayesKGR: Bayesian Few-Shot Learning for Knowledge Graph Reasoning
    Zhao, Feng
    Yan, Cheng
    Jin, Hai
    He, Lifang
    ACM TRANSACTIONS ON ASIAN AND LOW-RESOURCE LANGUAGE INFORMATION PROCESSING, 2023, 22 (06)
  • [50] Knowledge Guided Metric Learning for Few-Shot Text Classification
    Sui, Dianbo
    Chen, Yubo
    Mao, Binjie
    Qiu, Delai
    Liu, Kang
    Zhao, Jun
    2021 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES (NAACL-HLT 2021), 2021, : 3266 - 3271