LibFewShot: A Comprehensive Library for Few-Shot Learning

Citations: 34
Authors
Li, Wenbin [1 ]
Wang, Ziyi [1 ]
Yang, Xuesong [1 ]
Dong, Chuanqi [1 ]
Tian, Pinzhuo [2 ]
Qin, Tiexin [3 ]
Huo, Jing [1 ]
Shi, Yinghuan [1 ]
Wang, Lei [4 ]
Gao, Yang [1 ]
Luo, Jiebo [5 ]
Affiliations
[1] Nanjing Univ, State Key Lab Novel Software Technol, Nanjing 210023, Peoples R China
[2] Shanghai Univ, Sch Comp Engn & Sci, Shanghai 200444, Peoples R China
[3] City Univ Hong Kong, Dept Elect Engn, Kowloon Tong, Hong Kong, Peoples R China
[4] Univ Wollongong, Sch Comp & Informat Technol, Wollongong, NSW 2522, Australia
[5] Univ Rochester, Dept Comp Sci, Rochester, NY 14627 USA
Funding
National Natural Science Foundation of China
Keywords
Fair comparison; few-shot learning; image classification; unified framework
DOI
10.1109/TPAMI.2023.3312125
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Few-shot learning, especially few-shot image classification, has received increasing attention and witnessed significant advances in recent years. Some recent studies implicitly show that many generic techniques or "tricks", such as data augmentation, pre-training, knowledge distillation, and self-supervision, may greatly boost the performance of a few-shot learning method. Moreover, different works may employ different software platforms, backbone architectures, and input image sizes, making fair comparisons difficult and leaving practitioners struggling with reproducibility. To address these issues, we propose a comprehensive library for few-shot learning (LibFewShot) by re-implementing eighteen state-of-the-art few-shot learning methods in a unified framework with a single PyTorch codebase. Furthermore, based on LibFewShot, we provide comprehensive evaluations on multiple benchmarks with various backbone architectures to examine common pitfalls and the effects of different training tricks. In addition, with respect to recent doubts about the necessity of the meta- or episodic-training mechanism, our evaluation results confirm that such a mechanism is still necessary, especially when combined with pre-training. We hope our work can not only lower the barriers for beginners entering the area of few-shot learning but also elucidate the effects of nontrivial tricks to facilitate intrinsic research on few-shot learning.
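The episodic (meta-)training mechanism discussed in the abstract organizes learning into N-way K-shot tasks rather than standard mini-batches. The minimal PyTorch sketch below illustrates what such an episode looks like, using a prototype-based classification step in the spirit of Prototypical Networks; it is an illustrative assumption-laden example, not LibFewShot's actual API, and all names (sample_episode, proto_classify, the fake dataset) are hypothetical.

```python
# Minimal sketch of N-way K-shot episodic sampling and a prototype-based
# classification step. Illustrative only; NOT LibFewShot's API.
import random
from collections import defaultdict

import torch


def sample_episode(features, labels, n_way=5, k_shot=1, q_query=15):
    """Draw one N-way K-shot episode from pre-computed feature vectors."""
    by_class = defaultdict(list)
    for f, y in zip(features, labels):
        by_class[int(y)].append(f)
    classes = random.sample(list(by_class.keys()), n_way)
    support, query, query_y = [], [], []
    for new_label, c in enumerate(classes):
        picked = random.sample(by_class[c], k_shot + q_query)
        support.append(torch.stack(picked[:k_shot]))   # (k_shot, d) per class
        query.extend(picked[k_shot:])
        query_y.extend([new_label] * q_query)
    return torch.stack(support), torch.stack(query), torch.tensor(query_y)


def proto_classify(support, query):
    """Assign each query feature to its nearest class prototype (support mean)."""
    prototypes = support.mean(dim=1)         # (n_way, d)
    dists = torch.cdist(query, prototypes)   # (n_query, n_way)
    return dists.argmin(dim=1)


if __name__ == "__main__":
    # Hypothetical embedded dataset: 20 classes, 30 samples each, 64-dim features.
    feats = [torch.randn(64) for _ in range(600)]
    labs = [i // 30 for i in range(600)]
    s, q, y = sample_episode(feats, labs, n_way=5, k_shot=5, q_query=15)
    acc = (proto_classify(s, q) == y).float().mean().item()
    print(f"episode accuracy with random features: {acc:.3f}")
```

In actual meta-training, the features would come from a learnable backbone and many such episodes would be sampled per epoch, so that the training condition mimics the N-way K-shot evaluation protocol.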
Pages: 14938-14955
Page count: 18