Few-shot open-set recognition via pairwise discriminant aggregation

Cited by: 1
Authors
Jin, Jian [1 ]
Shen, Yang [1 ]
Fu, Zhenyong [1 ]
Yang, Jian [1 ]
Affiliations
[1] Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Key Lab Intelligent Percept & Syst High Dimens Inf, PCA Lab,Minist Educ, Nanjing 210094, Peoples R China
Funding
US National Science Foundation;
Keywords
Few-shot learning; Open-set recognition; Two-level recognition framework; Unknown representation;
DOI
10.1016/j.neucom.2024.128214
CLC classification number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104; 0812; 0835; 1405;
Abstract
Few-shot open-set recognition (FSOR) aims to develop models capable of generalizing to new tasks for both unknown detection and known classification with limited labeled data. Previous FSOR methods lack a comprehensive definition of the open space in the few-shot scenario, making models susceptible to overgeneralization. In this paper, we propose a novel method called Pairwise Discriminant Aggregation (PDAgg), which addresses FSOR within a two-level recognition framework. PDAgg unifies the diverse optimization goals of FSOR at the pair level and provides a reasonable aggregate-level representation for unknown samples, thereby greatly enhancing model generalization to open space in the few-shot context. Specifically, PDAgg treats support-query pairs as the basic recognition units, which are adapted to a pair-specific feature space by enhancing pairwise representative features and incorporating a global knowledge context. Binary discriminant analyses are performed on the adapted pair embeddings to estimate pair-level discriminant scores, which are then jointly aggregated to achieve both unknown detection and few-shot classification. Extensive experiments demonstrate that our method delivers comparable or even better performance than existing FSOR methods while using less extra information.
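The two-level scheme described in the abstract — score each support-query pair, then aggregate pair scores per class for classification and thresholding for unknown detection — can be illustrated with a minimal toy sketch. This is not the paper's method: the learned pair-specific feature adaptation and binary discriminants are replaced here by plain cosine similarity, and all function and variable names are hypothetical.

```python
import numpy as np

def pdagg_scores(support, support_labels, query, n_classes, tau=10.0):
    """Toy pair-level scoring and aggregation for few-shot open-set recognition.

    support: (S, D) array of support embeddings; query: (D,) query embedding.
    Each support-query pair receives a discriminant score (cosine similarity
    here, standing in for a learned binary discriminant); scores are then
    aggregated per class. A low maximum class score flags the query as unknown.
    """
    # Pair-level scores: one per support-query pair.
    sims = support @ query / (
        np.linalg.norm(support, axis=1) * np.linalg.norm(query) + 1e-8
    )
    # Aggregate pair scores within each class.
    class_scores = np.array(
        [sims[support_labels == c].mean() for c in range(n_classes)]
    )
    # Closed-set classification: softmax over aggregated class scores.
    exp = np.exp(tau * (class_scores - class_scores.max()))
    probs = exp / exp.sum()
    # Open-set signal: weak affinity to every known class suggests "unknown".
    unknown_score = 1.0 - class_scores.max()
    return probs, unknown_score
```

In this sketch the same pair scores drive both decisions, mirroring the unified pair-level objective the abstract describes; a query whose best aggregated class score stays low would be rejected as open-set rather than forced into a known class.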
Pages: 12