Towards Few-Annotation Learning for Object Detection: Are Transformer-based Models More Efficient?

Cited by: 2
Authors
Bouniot, Quentin [1,2]
Loesch, Angelique [1]
Audigier, Romaric [1]
Habrard, Amaury [2,3]
Affiliations
[1] Univ Paris Saclay, CEA, LIST, F-91120 Palaiseau, France
[2] Univ Lyon, UJM St Etienne, CNRS, IOGS, Lab Hubert Curien, UMR 5516, F-42023 St Etienne, France
[3] Inst Univ France IUF, Paris, France
Keywords
DOI
10.1109/WACV56688.2023.00016
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
For specialized and dense downstream tasks such as object detection, labeling data requires expertise and can be very expensive, making few-shot and semi-supervised models much more attractive alternatives. While in the few-shot setup we observe that transformer-based object detectors perform better than convolution-based two-stage models for a similar number of parameters, they are not as effective when used with recent approaches in the semi-supervised setting. In this paper, we propose a semi-supervised method tailored for the current state-of-the-art object detector Deformable DETR in the few-annotation learning setup using a student-teacher architecture, which avoids relying on sensitive post-processing of the pseudo-labels generated by the teacher model. We evaluate our method on the semi-supervised object detection benchmarks COCO and Pascal VOC, and it outperforms previous methods, especially when annotations are scarce. We believe that our contributions open new possibilities for adapting similar object detection methods to this setup as well.
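To make the student-teacher idea concrete, the Python/PyTorch sketch below shows one generic semi-supervised training step for a DETR-style detector with an EMA-updated teacher, whose predictions on weakly augmented unlabeled images serve directly as training targets for the student. This is only an illustration of the general scheme summarized in the abstract, not the authors' released implementation; all names here (student, teacher, build_teacher, ema_update, detr_loss, unsup_weight, the weak/strong augmentation batches) are hypothetical.

# Minimal sketch of a student-teacher semi-supervised step for a DETR-style
# detector. Hypothetical names throughout; detr_loss stands in for a
# set-prediction (Hungarian-matching) detection loss.
import copy
import torch

def build_teacher(student):
    # The teacher starts as a frozen copy of the student and is only updated
    # through an exponential moving average (EMA) of the student's weights.
    teacher = copy.deepcopy(student)
    for p in teacher.parameters():
        p.requires_grad_(False)
    return teacher

@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    # Slowly move the teacher's weights towards the student's.
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(momentum).add_(p_s, alpha=1.0 - momentum)

def semi_supervised_step(student, teacher, optimizer, detr_loss,
                         labeled_batch, unlabeled_weak, unlabeled_strong,
                         unsup_weight=1.0):
    images_l, targets_l = labeled_batch

    # Teacher predicts on weakly augmented unlabeled images; its outputs are
    # used as training targets for the student without a hand-tuned
    # confidence-threshold / NMS filtering stage.
    with torch.no_grad():
        pseudo_targets = teacher(unlabeled_weak)

    # Student is supervised on labeled images and on strongly augmented
    # unlabeled images via the teacher's pseudo-labels.
    sup_loss = detr_loss(student(images_l), targets_l)
    unsup_loss = detr_loss(student(unlabeled_strong), pseudo_targets)
    loss = sup_loss + unsup_weight * unsup_loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)
    return loss.item()

The point of the sketch is the design choice the abstract highlights: the unsupervised branch consumes the teacher's raw predictions rather than pseudo-labels filtered by a sensitive post-processing step, which is the part that typically requires careful threshold tuning in prior semi-supervised detectors.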
Pages: 75-84
Page count: 10