RankDNN: Learning to Rank for Few-Shot Learning

Cited by: 0
Authors
Guo, Qianyu [1 ,2 ]
Gong Haotong [1 ]
Wei, Xujun [1 ,3 ]
Fu, Yanwei [2 ]
Yu, Yizhou [4 ]
Zhang, Wenqiang [2 ,3 ]
Ge, Weifeng [1 ,2 ]
Affiliations
[1] Fudan Univ, Sch Comp Sci, Nebula AI Grp, Shanghai, Peoples R China
[2] Shanghai Key Lab Intelligent Informat Proc, Shanghai, Peoples R China
[3] Fudan Univ, Acad Engn & Technol, Shanghai, Peoples R China
[4] Univ Hong Kong, Dept Comp Sci, Hong Kong, Peoples R China
Funding
National Key R&D Program of China; National Natural Science Foundation of China;
Keywords
KRONECKER PRODUCT;
DOI
Not available
Chinese Library Classification
TP18 [Artificial intelligence theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
This paper introduces a new few-shot learning pipeline that casts relevance ranking for image retrieval as binary ranking relation classification. In comparison to image classification, ranking relation classification is sample efficient and domain agnostic. It also provides a new perspective on few-shot learning and is complementary to state-of-the-art methods. The core component of our deep neural network is a simple MLP, which takes as input an image triplet encoded as the difference between two vector Kronecker products, and outputs a binary relevance ranking order. The proposed RankMLP can be built on top of any state-of-the-art feature extractor, and the entire deep neural network is called the ranking deep neural network, or RankDNN. Moreover, RankDNN can be flexibly fused with other post-processing methods. During meta-testing, RankDNN ranks support images according to their similarity with the query sample, and each query is assigned the class label of its nearest neighbor. Experiments demonstrate that RankDNN effectively improves the performance of its baselines across a variety of backbones and outperforms previous state-of-the-art algorithms on multiple few-shot learning benchmarks, including miniImageNet, tieredImageNet, Caltech-UCSD Birds, and CIFAR-FS. Furthermore, experiments on the cross-domain challenge demonstrate the superior transferability of RankDNN. The code is available at: https://github.com/guoqianyu-alberta/RankDNN.
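The abstract's triplet encoding can be illustrated with a minimal sketch. Given feature vectors for a query q and two support images a and b, the input to the ranking MLP is the difference of two Kronecker products, kron(q, a) - kron(q, b); a binary output then says whether a should rank above b for q. The tiny MLP below uses random, untrained weights purely to show the shapes and data flow (the paper's RankMLP is trained; `ranking_input` and `rank_mlp` are hypothetical names, not from the released code):

```python
import numpy as np

rng = np.random.default_rng(0)

def ranking_input(q, a, b):
    # Encode the triplet (query, support_a, support_b) as the difference
    # between two vector Kronecker products, as described in the abstract.
    return np.kron(q, a) - np.kron(q, b)

def rank_mlp(x, W1, b1, W2, b2):
    # Toy two-layer MLP: ReLU hidden layer, sigmoid over a single logit.
    h = np.maximum(0.0, x @ W1 + b1)
    logit = h @ W2 + b2
    return 1.0 / (1.0 + np.exp(-logit))  # P(a ranks above b for query q)

d = 8                                    # toy feature dimension
q, a, b = rng.standard_normal((3, d))    # stand-ins for backbone features
x = ranking_input(q, a, b)               # shape (d*d,) = (64,)

W1 = rng.standard_normal((d * d, 16))
b1 = np.zeros(16)
W2 = rng.standard_normal(16)
b2 = 0.0
p = rank_mlp(x, W1, b1, W2, b2)
print(x.shape, float(p))
```

Note that the encoding is antisymmetric: ranking_input(q, b, a) == -ranking_input(q, a, b), so swapping the two support images negates the MLP input, which is a natural fit for learning a binary ranking relation.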
Pages: 728 - 736
Page count: 9
Related Papers
50 items in total
  • [21] Prototype Completion for Few-Shot Learning
    Zhang, Baoquan
    Li, Xutao
    Ye, Yunming
    Feng, Shanshan
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (10) : 12250 - 12268
  • [22] Few-Shot Learning With a Strong Teacher
    Ye, Han-Jia
    Ming, Lu
    Zhan, De-Chuan
    Chao, Wei-Lun
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (03) : 1425 - 1440
  • [23] Local Propagation for Few-Shot Learning
    Lifchitz, Yann
    Avrithis, Yannis
    Picard, Sylvaine
    2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 10457 - 10464
  • [24] Few-Shot Learning With Class Imbalance
    Ochal, M.
    Patacchiola, M.
    Vazquez, J.
    Storkey, A.
    Wang, S.
    IEEE TRANSACTIONS ON ARTIFICIAL INTELLIGENCE, 2023, 4 (05) : 1348 - 1358
  • [25] Explore pretraining for few-shot learning
    Li, Yan
    Huang, Jinjie
    MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83 : 4691 - 4702
  • [26] Few-Shot Learning for Defence and Security
    Robinson, Todd
    ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS II, 2020, 11413
  • [27] Few-Shot Learning for Image Denoising
    Jiang, Bo
    Lu, Yao
    Zhang, Bob
    Lu, Guangming
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2023, 33 (09) : 4741 - 4753
  • [28] Few-shot Learning with Noisy Labels
    Liang, Kevin J.
    Rangrej, Samrudhdhi B.
    Petrovic, Vladan
    Hassner, Tal
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, : 9079 - 9088
  • [29] Few-shot Continual Infomax Learning
    Gu, Ziqi
    Xu, Chunyan
    Yang, Jian
    Cui, Zhen
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 19167 - 19176
  • [30] Adaptive Subspaces for Few-Shot Learning
    Simon, Christian
    Koniusz, Piotr
    Nock, Richard
    Harandi, Mehrtash
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, : 4135 - 4144