Attribute Surrogates Learning and Spectral Tokens Pooling in Transformers for Few-shot Learning

Cited by: 26
Authors
He, Yangji [1 ,2 ]
Liang, Weihan [1 ,2 ]
Zhao, Dongyang [1 ,2 ]
Zhou, Hong-Yu [3 ]
Ge, Weifeng [1 ,2 ]
Yu, Yizhou [3 ]
Zhang, Wenqiang [2 ]
Affiliations
[1] Fudan Univ, Sch Comp Sci, Nebula AI Grp, Shanghai, Peoples R China
[2] Shanghai Key Lab Intelligent Informat Proc, Shanghai, Peoples R China
[3] Univ Hong Kong, Hong Kong, Peoples R China
Source
2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | 2022
Funding
National Natural Science Foundation of China; National Key R&D Program of China;
DOI
10.1109/CVPR52688.2022.00891
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper presents new hierarchically cascaded transformers that improve data efficiency through attribute surrogate learning and spectral tokens pooling. Vision transformers have recently been considered a promising alternative to convolutional neural networks for visual recognition, but when training data is insufficient they overfit and perform poorly. To improve data efficiency, we propose hierarchically cascaded transformers that exploit intrinsic image structures through spectral tokens pooling and optimize the learnable parameters through latent attribute surrogates. Spectral tokens pooling uses the intrinsic image structure to reduce the ambiguity between foreground content and background noise, while the attribute surrogate learning scheme is designed to benefit from the rich visual information in image-label pairs rather than the bare visual concepts assigned by labels. Our Hierarchically Cascaded Transformers, called HCTransformers, are built upon the self-supervised learning framework DINO and are evaluated on several popular few-shot learning benchmarks. In the inductive setting, HCTransformers surpass the DINO baseline by large margins of 9.7% in 5-way 1-shot accuracy and 9.17% in 5-way 5-shot accuracy on miniImageNet, demonstrating that HCTransformers extract discriminative features efficiently. HCTransformers also show clear advantages over state-of-the-art few-shot classification methods in both 5-way 1-shot and 5-way 5-shot settings on four popular benchmark datasets: miniImageNet, tieredImageNet, FC100, and CIFAR-FS. The trained weights and code are available at https://github.com/StomachCold/HCTransformers.
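The paper's implementation is available at the repository linked above; for orientation only, the following is a minimal PyTorch sketch of the spectral-clustering idea behind tokens pooling. The cosine-similarity affinity graph, the argmax cluster assignment, and the `spectral_tokens_pooling` name are simplifying assumptions for illustration, not the authors' method.

```python
import torch
import torch.nn.functional as F

def spectral_tokens_pooling(tokens: torch.Tensor, num_clusters: int) -> torch.Tensor:
    """Illustrative sketch of spectral clustering-based token pooling.

    NOT the paper's implementation; it only shows the idea of merging
    patch tokens along intrinsic image structure. The cosine-similarity
    affinity and the argmax assignment are simplifying assumptions.

    tokens: (N, D) patch-token embeddings from one transformer stage.
    Returns (num_clusters, D) pooled tokens.
    """
    n = tokens.shape[0]

    # Affinity graph over tokens from non-negative cosine similarity.
    x = F.normalize(tokens, dim=-1)
    affinity = (x @ x.T).clamp(min=0.0)

    # Symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}.
    d_inv_sqrt = affinity.sum(dim=-1).clamp(min=1e-8).rsqrt()
    laplacian = torch.eye(n) - d_inv_sqrt[:, None] * affinity * d_inv_sqrt[None, :]

    # Spectral embedding: eigenvectors of the smallest eigenvalues.
    _, eigvecs = torch.linalg.eigh(laplacian)
    embedding = eigvecs[:, :num_clusters]          # (N, num_clusters)

    # Assign each token to a cluster; a k-means step on `embedding` is the
    # textbook choice, argmax over coordinates is a cheap stand-in here.
    assign = embedding.abs().argmax(dim=-1)        # (N,)

    # Average-pool the tokens of each cluster into a single token,
    # falling back to the global mean for any empty cluster.
    pooled = torch.stack([
        tokens[assign == c].mean(dim=0) if (assign == c).any() else tokens.mean(dim=0)
        for c in range(num_clusters)
    ])
    return pooled
```

In a cascade like the one described in the abstract, a pooling stage of this kind would reduce the token count between transformer stages, so that later stages operate on region-level tokens aligned with intrinsic image structure rather than on raw patch tokens.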
Pages: 9109-9119
Number of pages: 11