CORE: CORrelation-Guided Feature Enhancement for Few-Shot Image Classification

Cited by: 5
Authors
Xu, Jing [1 ]
Pan, Xinglin [1 ,2 ]
Wang, Jingquan
Pei, Wenjie [1 ]
Liao, Qing [1 ]
Xu, Zenglin [1 ,3 ]
Affiliations
[1] Harbin Inst Technol, Sch Sci & Technol, Shenzhen 510085, Guangdong, Peoples R China
[2] Hong Kong Univ Sci & Technol, Guangzhou 511453, Guangdong, Peoples R China
[3] Peng Cheng Lab, Shenzhen 510855, Peoples R China
Keywords
Feature extraction; Correlation; Training; Task analysis; Semantics; Image reconstruction; Decoding; Convolutional neural networks (CNNs); feature enhancement; few-shot learning (FSL); representation learning
DOI
10.1109/TNNLS.2024.3355774
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Few-shot classification aims to adapt classifiers trained on base classes to novel classes with only a few shots. However, the limited amount of training data is often inadequate to represent the intraclass variation of novel classes. This can result in a biased estimate of the feature distribution and, in turn, inaccurate decision boundaries, especially when the support data are outliers. To address this issue, we propose a feature enhancement method called CORrelation-guided feature Enhancement (CORE), which generates improved features for novel classes using weak supervision from the base classes. CORE utilizes an autoencoder (AE) architecture but incorporates classification information into its latent space. This design allows CORE to generate more discriminative features while discarding irrelevant content information. After being trained on the base classes, CORE's generative ability can be transferred to novel classes that are similar to those in the base classes. By using these generated features, we can reduce the estimation bias of the class distribution, making few-shot learning (FSL) less sensitive to the selection of support data. Our method is generic and flexible: it can be used with any feature extractor and classifier and can be easily integrated into existing FSL approaches. Experiments with different backbones and classifiers show that our proposed method consistently outperforms existing methods on various widely used benchmarks.
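The abstract's core idea (an AE whose latent space also carries classification information, later reused to generate extra features for novel-class support samples) can be sketched as follows. This is a minimal illustrative stand-in, not the paper's implementation: the linear encoder/decoder, the dimensions, and the joint reconstruction-plus-cross-entropy loss are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; the paper's actual dimensions are not given in the abstract.
FEAT_DIM, LATENT_DIM, N_CLASSES = 64, 16, 5

# Encoder/decoder as single linear maps -- an illustrative stand-in for CORE's AE.
W_enc = rng.normal(0, 0.1, (FEAT_DIM, LATENT_DIM))
W_dec = rng.normal(0, 0.1, (LATENT_DIM, FEAT_DIM))
# Classification head on the latent space: this is how class information
# is injected so latent codes stay discriminative.
W_cls = rng.normal(0, 0.1, (LATENT_DIM, N_CLASSES))

def encode(x):
    return x @ W_enc

def decode(z):
    return z @ W_dec

def joint_loss(x, y_onehot):
    """Reconstruction loss of the AE plus cross-entropy on the latent code."""
    z = encode(x)
    recon = decode(z)
    logits = z @ W_cls
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    recon_loss = ((recon - x) ** 2).mean()
    ce_loss = -(y_onehot * np.log(probs + 1e-9)).sum(axis=1).mean()
    return recon_loss + ce_loss

# At few-shot time: pass a single support feature through the trained AE to
# obtain a generated feature, then average it with the original support
# feature -- the extra sample is what reduces distribution-estimation bias.
support = rng.normal(size=(1, FEAT_DIM))
generated = decode(encode(support))
enriched_prototype = np.vstack([support, generated]).mean(axis=0)
```

In a real pipeline the AE would be trained on base-class features (minimizing `joint_loss`) before `encode`/`decode` are applied to novel-class support features, and the enriched prototype would feed whatever classifier the FSL method uses.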
Pages: 3098–3110
Page count: 13