GFENet: Generalization Feature Extraction Network for Few-Shot Object Detection

Cited by: 0
Authors
Ke, Xiao [1]
Chen, Qiuqin [1]
Liu, Hao [1]
Guo, Wenzhong [1]
Affiliations
[1] Fuzhou Univ, Coll Comp & Data Sci, Fujian Prov Key Lab Networking Comp & Intelligent, Fuzhou 350116, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Feature extraction; Data models; Object detection; Training; Adaptation models; Computational modeling; Shape; Transfer learning; few-shot learning; object detection; data augmentation; self-distillation;
DOI
10.1109/TCSVT.2024.3435977
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Subject Classification
0808; 0809;
Abstract
Few-shot object detection enables rapid detection of novel-class objects by training detectors with only a minimal number of annotated novel-class instances. Transfer learning-based few-shot object detection methods have shown better performance than alternatives such as meta-learning. However, when trained on base-class data, the model may gradually become biased toward the characteristics of the individual base-class categories, which weakens its learning ability during fine-tuning on novel classes and, given the scarcity of novel-class data, leads to further overfitting. In this paper, we first show that the generalization performance of the base-class model has a significant impact on novel-class detection performance, and we propose a generalization feature extraction network framework to address this issue. The framework perturbs the base model during training to encourage it to learn generalizable features, and it mitigates the impact of variations in object shape and size on overall detection performance, improving the generalization of the base model. Additionally, we propose a feature-level data augmentation method based on self-distillation to further enhance the overall generalization ability of the model. Our method achieves state-of-the-art results on both the COCO and PASCAL VOC datasets, with a 6.94% improvement on the PASCAL VOC 10-shot benchmark.
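The abstract describes the method only at a high level, so the following is a minimal PyTorch-style sketch of what feature-level augmentation with self-distillation could look like, not the authors' implementation. The Gaussian feature noise, the MSE distillation loss, and all names (feature_noise, SelfDistillDetectorSketch, std) are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): feature-level augmentation combined
# with self-distillation, assuming a PyTorch backbone and detection head.
import torch
import torch.nn as nn
import torch.nn.functional as F

def feature_noise(feat: torch.Tensor, std: float = 0.1) -> torch.Tensor:
    """Perturb intermediate features with Gaussian noise (hypothetical choice)."""
    return feat + std * torch.randn_like(feat)

class SelfDistillDetectorSketch(nn.Module):
    def __init__(self, backbone: nn.Module, head: nn.Module):
        super().__init__()
        self.backbone = backbone  # shared feature extractor
        self.head = head          # detection head (stub)

    def forward(self, images: torch.Tensor):
        clean = self.backbone(images)        # "teacher" view of the features
        augmented = feature_noise(clean)     # perturbed "student" view, same model
        # Self-distillation term: align perturbed features with the clean ones
        # (teacher detached), pushing the model toward noise-invariant,
        # generalizable features rather than category-specific ones.
        distill = F.mse_loss(augmented, clean.detach())
        preds = self.head(augmented)         # detect from the augmented features
        return preds, distill                # distill is added to the task loss
```

In training, the returned distillation loss would be weighted and summed with the standard detection losses; the weight and the noise scale are free hyperparameters in this sketch.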
Pages: 12741 - 12755
Number of pages: 15