Learning to Learn Task Transformations for Improved Few-Shot Classification

Cited by: 0
Authors
Zheng, Guangtao [1 ]
Suo, Qiuling [2 ]
Huai, Mengdi [3 ]
Zhang, Aidong [1 ]
Affiliations
[1] Univ Virginia, Dept Comp Sci, Charlottesville, VA 22904 USA
[2] Univ Buffalo, Dept Comp Sci & Engn, Buffalo, NY USA
[3] Iowa State Univ, Dept Comp Sci, Ames, IA USA
Source
PROCEEDINGS OF THE 2023 SIAM INTERNATIONAL CONFERENCE ON DATA MINING, SDM | 2023
Funding
U.S. National Science Foundation
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Meta-learning has shown great promise in few-shot image classification, where only a small amount of labeled data is available in each classification task. Many training tasks are provided to train a meta-model that can quickly learn new and similar concepts from few labeled samples. Data augmentation is often used to augment training tasks to avoid overfitting. However, existing data augmentation methods are often manually designed and fixed during training, ignoring training dynamics and the differences between various meta-learning settings specified by meta-model architectures and meta-learning algorithms. To address this problem, we add a task transformation layer between a training task and a meta-model so that the right amount of perturbation is added to training tasks for a given meta-learning setting at a given training stage. By jointly optimizing the task transformation layer and the meta-model, we avoid the risk of providing tasks that are either too easy or too difficult during training. We design the task transformation layer as a stochastic transformation function, adding flexibility in how a training task can be transformed. We leverage differentiable data augmentations as the building blocks of the task transformation function for efficient optimization. Extensive experiments show that our method consistently improves the few-shot generalization performance of various meta-models trained with different meta-learning algorithms, meta-model architectures, and datasets.
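To make the mechanism in the abstract concrete, below is a minimal PyTorch sketch of a stochastic, differentiable task-transformation layer optimized jointly with a meta-model. It is not the authors' implementation: the names (TaskTransform, meta_model), the toy classifier, and the use of a single shared loss are illustrative assumptions. The paper's actual joint objective may differ (for example, it may control task difficulty rather than simply minimize the meta-loss); this sketch only demonstrates the plumbing, namely that gradients flow end to end through stochastic, differentiable augmentations into learnable transformation parameters.

```python
# Minimal sketch, assuming a PyTorch setting. Illustrative only.
import torch
import torch.nn as nn

class TaskTransform(nn.Module):
    """Stochastic transformation built from differentiable augmentations.

    Each call perturbs a batch of task images; gradients flow into the
    learnable magnitudes, so the layer can adapt to the training stage
    and the meta-learning setting.
    """
    def __init__(self):
        super().__init__()
        # Unconstrained parameters mapped to (0, 1) magnitudes via sigmoid.
        self.brightness = nn.Parameter(torch.tensor(-2.0))
        self.contrast = nn.Parameter(torch.tensor(-2.0))

    def forward(self, x):  # x: (batch, C, H, W) in [0, 1]
        b = torch.sigmoid(self.brightness)
        c = torch.sigmoid(self.contrast)
        # Per-image random factors make the transformation stochastic.
        u1 = torch.rand(x.size(0), 1, 1, 1, device=x.device)
        u2 = torch.rand(x.size(0), 1, 1, 1, device=x.device)
        x = x + b * (2 * u1 - 1)                        # brightness jitter
        mean = x.mean(dim=(2, 3), keepdim=True)
        x = (x - mean) * (1 + c * (2 * u2 - 1)) + mean  # contrast jitter
        return x.clamp(0, 1)

# Joint optimization sketch: the transform perturbs each training task
# before it reaches the meta-model, and both are updated from one loss.
transform = TaskTransform()
meta_model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 5))  # toy stand-in
opt = torch.optim.Adam([
    {"params": meta_model.parameters(), "lr": 1e-3},
    {"params": transform.parameters(), "lr": 1e-2},
])

support = torch.rand(25, 3, 32, 32)             # toy 5-way 5-shot task
labels = torch.arange(5).repeat_interleave(5)   # 5 samples per class
logits = meta_model(transform(support))
loss = nn.functional.cross_entropy(logits, labels)
opt.zero_grad()
loss.backward()                                 # gradients reach both modules
opt.step()
```

One design point the sketch preserves: stochasticity comes from per-image uniform factors while the learnable magnitudes stay outside the sampling, so the layer produces varied tasks at every step yet remains fully differentiable with respect to its parameters.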
Pages: 784-792 (9 pages)