CRPN: DISTINGUISH NOVEL CATEGORIES VIA CLASS-RELEVANT REGION PROPOSAL NETWORK FOR FEW-SHOT OBJECT DETECTION

Cited by: 0
|
Authors
Wang, Han [1 ]
Li, Yali [1 ]
Wang, Shengjin [1 ]
Affiliations
[1] Tsinghua Univ, Beijing Natl Res Ctr Informat Sci & Technol (BNRist), Dept Elect Engn, Beijing, Peoples R China
Source
2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP) | 2022
Keywords
Object Detection; Novel Class; Few-Shot Learning; Deep Metric Learning; RPN;
DOI
10.1109/ICASSP43922.2022.9746445
CLC Number
O42 [Acoustics];
Discipline Code
070206 ; 082403 ;
Abstract
Few-shot object detection (FSOD), in which only a handful of training examples are available during learning, has attracted increasing attention in computer vision. A commonly overlooked issue in FSOD is that novel classes are usually classified as background clutter during pre-training. Another difficulty is that detection performance degrades, especially under higher IoU thresholds, because previous deep metric learning (DML) approaches require frozen region proposals without class-relevant box regression. In this work, we propose a Class-relevant Region Proposal Network (CRPN). CRPN derives network parameters for novel classes from pre-trained convolution kernels according to their feature similarity, which eliminates the adverse effects above and improves few-shot detection performance. The proposed CRPN kills two birds with one stone, making two main contributions: (1) it transfers a region proposal network pre-trained on base classes to novel classes; (2) it performs class-dependent bounding-box regression, which previous DML classifiers lack. In experiments, we achieve 12.7% AP75 on the MS COCO dataset and 28.6% AP75 on the ImageNet2015 dataset under the few-shot setting introduced by previous works, exceeding the state of the art by a clear margin.
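The abstract's core idea, deriving novel-class RPN parameters from pre-trained base-class convolution kernels according to feature similarity, can be sketched as a similarity-weighted combination of base kernels. This is a minimal illustration, not the paper's exact method: the prototype vectors, the softmax weighting, and the function name `derive_novel_kernel` are all assumptions for the sake of the example.

```python
import numpy as np

def derive_novel_kernel(base_kernels, base_protos, novel_proto):
    """Derive a novel-class conv kernel as a similarity-weighted
    combination of pre-trained base-class kernels (hypothetical sketch).

    base_kernels: (C_base, out, kh, kw) stack of base-class kernels
    base_protos:  (C_base, d) feature prototypes of the base classes
    novel_proto:  (d,) feature prototype of the novel class
    """
    # cosine similarity between the novel prototype and each base prototype
    sims = base_protos @ novel_proto / (
        np.linalg.norm(base_protos, axis=1) * np.linalg.norm(novel_proto) + 1e-8)
    # softmax over base classes so the weights form a convex combination
    weights = np.exp(sims) / np.exp(sims).sum()
    # weighted sum of base kernels -> kernel for the novel class
    return np.tensordot(weights, base_kernels, axes=1)
```

Because the weights sum to one, the derived kernel stays inside the convex hull of the base-class kernels; a novel class most similar to one base class inherits mostly that class's parameters.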
Pages: 2230 - 2234
Page count: 5
Related Papers
50 records in total
  • [1] Few-Shot Object Detection with Proposal Balance Refinement
    Kim, Sueyeon
    Nam, Woo-Jeoung
    Lee, Seong-Whan
    2022 26TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2022, : 4700 - 4707
  • [2] Few-Shot Object Detection via Sample Processing
    Xu, Honghui
    Wang, Xinqing
    Shao, Faming
    Duan, Baoguo
    Zhang, Peng
    IEEE ACCESS, 2021, 9 (09) : 29207 - 29221
  • [3] Few-Shot Object Detection via Knowledge Transfer
    Kim, Geonuk
    Jung, Hong-Gyu
    Lee, Seong-Whan
    2020 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC), 2020, : 3564 - 3569
  • [4] Few-Shot Object Detection via Metric Learning
    Zhu Min
    Zhang Chongyang
    FOURTEENTH INTERNATIONAL CONFERENCE ON MACHINE VISION (ICMV 2021), 2022, 12084
  • [5] Proposal Distribution Calibration for Few-Shot Object Detection
    Li, Bohao
    Liu, Chang
    Shi, Mengnan
    Chen, Xiaozhong
    Ji, Xiangyang
    Ye, Qixiang
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2025, 36 (01) : 1911 - 1918
  • [6] Balancing Attention to Base and Novel Categories for Few-Shot Object Detection in Remote Sensing Imagery
    Zhu, Zining
    Wang, Peijin
    Diao, Wenhui
    Yang, Jinze
    Kong, Lingyu
    Wang, Hongqi
    Sun, Xian
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2025, 63
  • [7] FRDet: Few-shot object detection via feature reconstruction
    Chen, Zhihao
    Mao, Yingchi
    Qian, Yong
    Pan, Zhenxiang
    Xu, Shufang
    IET IMAGE PROCESSING, 2023, 17 (12) : 3599 - 3615
  • [8] Few-Shot Object Detection with Memory Contrastive Proposal Based on Semantic Priors
    Xiao, Linlin
    Xu, Huahu
    Xiao, Junsheng
    Huang, Yuzhe
    ELECTRONICS, 2023, 12 (18)
  • [9] Few-shot object detection via baby learning
    Vu, Anh-Khoa Nguyen
    Nguyen, Nhat-Duy
    Nguyen, Khanh-Duy
    Nguyen, Vinh-Tiep
    Ngo, Thanh Duc
    Do, Thanh-Toan
    Nguyen, Tam V.
    IMAGE AND VISION COMPUTING, 2022, 120
  • [10] Temporal Speciation Network for Few-Shot Object Detection
    Zhao, Xiaowei
    Liu, Xianglong
    Ma, Yuqing
    Bai, Shihao
    Shen, Yifan
    Hao, Zeyu
    Liu, Aishan
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 8267 - 8278