HCPNet: Learning discriminative prototypes for few-shot remote sensing image scene classification

Cited by: 12
Authors
Zhu, Junjie [1 ]
Yang, Ke [1 ]
Guan, Naiyang [1 ]
Yi, Xiaodong [1 ]
Qiu, Chunping [1 ]
Affiliations
[1] Acad Mil Sci, Def Innovat Inst, Beijing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Satellite imaging; Few-shot classification; Meta-learning; Contrastive learning; Few-shot learning;
D O I
10.1016/j.jag.2023.103447
Chinese Library Classification (CLC)
TP7 [Remote sensing technology];
Discipline classification codes
081102 ; 0816 ; 081602 ; 083002 ; 1404 ;
Abstract
Few-shot learning is an important and challenging research topic for remote sensing image scene classification. Many existing approaches address this challenge with meta-learning and metric-learning techniques, which aim to develop feature extractors that quickly adapt to new tasks from limited labeled data. However, these methods struggle on real-world datasets that exhibit class confusion, i.e., high inter-class similarity and large intra-class diversity. To overcome this limitation, we propose a novel and effective approach that learns query-specific prototype boundaries for few-shot remote sensing scene classification (FS-RSSC). Our approach consists of two key components: (1) a query-specific prototype representation that incorporates the query feature as a key factor in prototype formation, in contrast to conventional methods that use the query only for model prediction; and (2) a prototypical regularization that enhances the discriminativeness of the prototypes by maximizing their inter-class separation. We model both components within a contrastive learning framework and integrate meta-learning with contrastive learning to learn an optimal query-specific prototype representation initialization that generalizes well to new queries. We name our model the Hybrid Contrastive Prototypical Network (HCPNet). We evaluate HCPNet on four popular datasets under two standard benchmarks: general few-shot classification and few-shot domain generalization. Experimental results demonstrate that our method outperforms state-of-the-art methods on both benchmarks by a large margin.
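The two components described in the abstract can be illustrated with a minimal NumPy sketch. This is a hedged illustration under assumptions, not the authors' HCPNet implementation: it assumes a query-conditioned prototype is a similarity-weighted mean of each class's support features, and that the prototypical regularization can be approximated by penalizing the mean pairwise cosine similarity among prototypes (function names and weighting scheme are hypothetical).

```python
import numpy as np

def l2norm(x, axis=-1):
    """Normalize vectors to unit length along the given axis."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + 1e-8)

def query_specific_prototypes(support, query):
    """Form query-conditioned class prototypes (illustrative sketch).

    support: (C, K, D) array of K support features for each of C classes.
    query:   (D,) feature of a single query sample.

    Each prototype is a weighted mean of its class's support features,
    where the weights are a softmax over cosine similarity to the query,
    so the query participates in prototype formation rather than only
    in prediction.
    """
    sims = l2norm(support) @ l2norm(query)            # (C, K) cosine sims
    w = np.exp(sims - sims.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                 # softmax per class
    return (w[..., None] * support).sum(axis=1)       # (C, D) prototypes

def prototype_separation_loss(protos):
    """Mean pairwise cosine similarity among prototypes.

    Minimizing this quantity pushes class prototypes apart, a simple
    stand-in for the inter-class separation objective described above.
    """
    p = l2norm(protos)
    sim = p @ p.T                                     # (C, C) similarities
    off_diag = sim[~np.eye(len(protos), dtype=bool)]  # drop self-similarity
    return off_diag.mean()
```

In a full episodic training loop, one would recompute the prototypes per query and add the separation term to the classification loss; here the two pieces are shown in isolation for clarity.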
Pages: 10