DN-DETR: Accelerate DETR Training by Introducing Query DeNoising

Cited by: 372
Authors
Li, Feng [1,2,5]
Zhang, Hao [1,2,5]
Liu, Shilong [2,3,5]
Guo, Jian [2]
Ni, Lionel M. [1,4]
Zhang, Lei [2]
Affiliations
[1] Hong Kong Univ Sci & Technol, Hong Kong, Peoples R China
[2] Int Digital Econ Acad IDEA, Shenzhen, Peoples R China
[3] Tsinghua Univ, Beijing, Peoples R China
[4] Hong Kong Univ Sci & Technol, Guangzhou, Peoples R China
[5] IDEA, Shenzhen, Peoples R China
Source
2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | 2022
DOI
10.1109/CVPR52688.2022.01325
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
We present in this paper a novel denoising training method to speed up DETR (DEtection TRansformer) training and offer a deepened understanding of the slow convergence issue of DETR-like methods. We show that the slow convergence results from the instability of bipartite graph matching, which causes inconsistent optimization goals in early training stages. To address this issue, in addition to the Hungarian loss, our method feeds noised ground-truth bounding boxes into the Transformer decoder and trains the model to reconstruct the original boxes, which effectively reduces the bipartite graph matching difficulty and leads to faster convergence. Our method is universal and can be plugged into any DETR-like method by adding dozens of lines of code to achieve a remarkable improvement. As a result, our DN-DETR yields a remarkable improvement (+1.9 AP) under the same setting and achieves the best result (AP 43.4 and 48.6 with 12 and 50 epochs of training, respectively) among DETR-like methods with a ResNet-50 backbone. Compared with the baseline under the same setting, DN-DETR achieves comparable performance with 50% of the training epochs. Code is available at https://github.com/FengLi-ust/DN-DETR.
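The core idea described in the abstract, adding noise to ground-truth boxes and training the decoder to reconstruct the originals, can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation (see the linked repository for that); the function name `noise_boxes` and the hyperparameters `box_noise_scale` and `label_flip_prob` are assumed names for illustration.

```python
import random

def noise_boxes(gt_boxes, box_noise_scale=0.4, label_flip_prob=0.2, num_classes=80):
    """Create noised copies of ground-truth boxes to serve as denoising queries.

    gt_boxes: list of ((cx, cy, w, h), label) with coordinates normalized to [0, 1].
    Returns a list of the same length; the model would be trained to map each
    noised box back to its original box and label (the denoising task).
    Hyperparameter names here are illustrative, not the paper's exact API.
    """
    noised = []
    for (cx, cy, w, h), label in gt_boxes:
        # Jitter the center proportionally to the box size.
        ncx = cx + random.uniform(-1, 1) * (w / 2) * box_noise_scale
        ncy = cy + random.uniform(-1, 1) * (h / 2) * box_noise_scale
        # Jitter the width and height multiplicatively (factor stays positive).
        nw = w * (1 + random.uniform(-1, 1) * box_noise_scale)
        nh = h * (1 + random.uniform(-1, 1) * box_noise_scale)
        # Optionally flip the class label, analogous to the paper's label noise.
        nlabel = random.randrange(num_classes) if random.random() < label_flip_prob else label
        # Clamp coordinates back into the valid normalized range.
        box = tuple(min(max(v, 0.0), 1.0) for v in (ncx, ncy, nw, nh))
        noised.append((box, nlabel))
    return noised
```

During training, these noised queries would be appended to the regular learnable queries and supervised directly against their originating ground-truth boxes, bypassing bipartite matching for that part of the loss.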
Pages: 13609-13617
Number of pages: 9