In defense of local descriptor-based few-shot object detection

Cited by: 0
Authors
Zhou, Shichao [1 ]
Li, Haoyan [1 ]
Wang, Zhuowei [1 ]
Zhang, Zekai [1 ]
Affiliations
[1] Beijing Information Science & Technology University, Key Laboratory of Information and Communication Systems, Ministry of Information Industry, Beijing, People's Republic of China
Funding
National Natural Science Foundation of China;
Keywords
few-shot learning; local descriptors; contextual features; kernel method; visual similarity;
DOI
10.3389/fnins.2024.1349204
Chinese Library Classification
Q189 [Neuroscience];
Discipline code
071006;
Abstract
State-of-the-art object detection models require an intensive parameter fine-tuning stage (using deep convolutional networks, etc.) with tens or hundreds of training examples. In contrast, human intelligence can robustly learn a new concept from just a few instances (i.e., few-shot detection). The contrast between the perception mechanisms of these two families of systems motivates us to revisit classical handcrafted local descriptors (e.g., SIFT, HOG) and non-parametric visual models, which innately require no learning/training phase. Herein, we argue that the inferior performance of these local descriptors results mainly from a lack of global structural awareness. To address this issue, we refine the local descriptors with spatial contextual attention over neighbor affinities and then embed them into a discriminative subspace guided by a Kernel-InfoNCE loss. Unlike conventional quantization of local descriptors in high-dimensional feature space or isometric dimensionality reduction, we seek a brain-inspired few-shot feature representation of the object manifold that combines data-independent primitive representations with semantic context learning, and thus aids generalization. The resulting embeddings, used as pattern vectors/tensors, permit an accelerated yet non-parametric visual similarity computation as the decision rule for final detection. Our approach to few-shot object detection is nearly learning-free, and experiments on remote sensing imagery (an approximately 2-D affine space) confirm its efficacy.
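The abstract outlines a three-stage pipeline: (1) refine handcrafted local descriptors with spatial contextual attention over neighbor affinities, (2) embed the refined descriptors into a discriminative subspace under a Kernel-InfoNCE loss, and (3) decide by non-parametric similarity to the few support examples. The record gives no formulas, so the NumPy sketch below is a minimal interpretation under stated assumptions: the function names (contextual_refine, kernel_infonce, classify) are hypothetical, descriptors are taken as precomputed SIFT/HOG-style arrays, and both the attention form (softmax over cosine affinities of k spatial neighbors) and the RBF-kernel variant of InfoNCE are plausible readings rather than the paper's definitions.

# Hypothetical sketch (not the paper's released code): a NumPy-only reading
# of the three steps named in the abstract.
import numpy as np

def contextual_refine(desc, xy, k=8, tau=0.1):
    """Refine each local descriptor with attention-weighted context from its
    k spatially nearest neighbors (assumed form of the 'spatial contextual
    attention of neighbor affinities').
    desc: (N, D) descriptors; xy: (N, 2) keypoint coordinates."""
    d2 = ((xy[:, None, :] - xy[None, :, :]) ** 2).sum(-1)  # pairwise sq. dist.
    nn = np.argsort(d2, axis=1)[:, 1:k + 1]                # k nearest (skip self)
    refined = np.empty_like(desc)
    for i in range(len(desc)):
        nb = desc[nn[i]]                                   # neighbor descriptors
        aff = nb @ desc[i] / (np.linalg.norm(nb, axis=1)
                              * np.linalg.norm(desc[i]) + 1e-8)
        w = np.exp(aff / tau)
        w /= w.sum()                                       # softmax attention
        refined[i] = 0.5 * desc[i] + 0.5 * (w @ nb)        # blend self + context
    return refined / (np.linalg.norm(refined, axis=1, keepdims=True) + 1e-8)

def kernel_infonce(z, labels, gamma=1.0, tau=0.5):
    """InfoNCE-style contrastive loss with an RBF kernel as the similarity,
    one assumed reading of 'Kernel-InfoNCE'. z: (N, D), labels: (N,)."""
    d2 = ((z[:, None, :] - z[None, :, :]) ** 2).sum(-1)
    logits = np.exp(-gamma * d2) / tau                     # kernel similarities
    np.fill_diagonal(logits, -np.inf)                      # exclude self-pairs
    pos = labels[:, None] == labels[None, :]
    np.fill_diagonal(pos, False)
    log_p = logits - np.log(np.exp(logits).sum(1, keepdims=True))
    return -log_p[pos].mean()                              # avg. positive log-prob

def classify(query, support, support_labels):
    """Non-parametric decision rule: label of the most similar support vector."""
    sims = support @ query / (np.linalg.norm(support, axis=1)
                              * np.linalg.norm(query) + 1e-8)
    return support_labels[np.argmax(sims)]

# Toy usage with random stand-ins for SIFT/HOG descriptors.
rng = np.random.default_rng(0)
desc = rng.normal(size=(32, 64))          # 32 descriptors of dimension 64
xy = rng.uniform(0, 100, size=(32, 2))    # their keypoint locations
z = contextual_refine(desc, xy, k=5)
labels = rng.integers(0, 4, size=32)      # 4 pseudo-classes
print("Kernel-InfoNCE loss:", kernel_infonce(z, labels))
print("predicted label:", classify(z[0], z[1:], labels[1:]))

The nearest-support rule in classify mirrors the "non-parametric visual similarity computation" named in the abstract; an actual detector would additionally score candidate image regions rather than whole-image descriptor sets.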
Pages: 10
Related papers
50 records in total
  • [1] Local descriptor-based multi-prototype network for few-shot learning
    Huang, Hongwei
    Wu, Zhangkai
    Li, Wenbin
    Huo, Jing
    Gao, Yang
    PATTERN RECOGNITION, 2021, 116
  • [2] Local descriptor-based spatial cross attention network for few-shot learning
    Huang, Jiamin
    Zhao, Lina
    Yang, Hongwei
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2024, 15 (10) : 4747 - 4759
  • [3] Revisiting Local Descriptor for Improved Few-Shot Classification
    He, Jun
    Hong, Richang
    Liu, Xueliang
    Xu, Mingliang
    Sun, Qianru
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2022, 18 (02)
  • [4] A Closer Look at Few-Shot Object Detection
    Liu, Yuhao
    Dong, Le
    He, Tengyang
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT VIII, 2024, 14432 : 430 - 447
  • [5] Spatial reasoning for few-shot object detection
    Kim, Geonuk
    Jung, Hong-Gyu
    Lee, Seong-Whan
    PATTERN RECOGNITION, 2021, 120
  • [6] Few-Shot Object Detection: A Comprehensive Survey
    Koehler, Mona
    Eisenbach, Markus
    Gross, Horst-Michael
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (09) : 11958 - 11978
  • [7] Few-shot object detection based on positive-sample improvement
    Ouyang, Yan
    Wang, Xin-qing
    Hu, Rui-zhe
    Xu, Hong-hui
    DEFENCE TECHNOLOGY, 2023, 28 : 74 - 86
  • [8] Few-Shot Object Detection Algorithm Based on Adaptive Relation Distillation
    Duan, Danting
    Zhong, Wei
    Peng, Liang
    Ran, Shuang
    Hu, Fei
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT XII, 2024, 14436 : 328 - 339
  • [9] Meta-Learning-Based Incremental Few-Shot Object Detection
    Cheng, Meng
    Wang, Hanli
    Long, Yu
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2022, 32 (04) : 2158 - 2169
  • [10] Critic Boosting Attention Network on Local Descriptor for Few-shot Learning
    Shi, Chengzhang
    Own, Chung-Ming
    Chou, Ching-chih
    Guo, Bailu
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021