Conditional DETR for Fast Training Convergence

Cited by: 472
Authors
Meng, Depu [1 ,4 ]
Chen, Xiaokang [2 ,4 ]
Fan, Zejia [2 ,4 ]
Zeng, Gang [2 ]
Li, Houqiang [1 ]
Yuan, Yuhui [3 ]
Sun, Lei [3 ]
Wang, Jingdong [3 ]
Affiliations
[1] Univ Sci & Technol China, Hefei, Peoples R China
[2] Peking Univ, Beijing, Peoples R China
[3] Microsoft Res Asia, Beijing, Peoples R China
[4] Microsoft Res, Beijing, Peoples R China
Source
2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021) | 2021
Funding
National Natural Science Foundation of China;
DOI
10.1109/ICCV48922.2021.00363
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The recently-developed DETR approach applies the transformer encoder-decoder architecture to object detection and achieves promising performance. In this paper, we address the critical issue of slow training convergence and present a conditional cross-attention mechanism for fast DETR training. Our approach is motivated by the observation that the cross-attention in DETR relies heavily on the content embeddings for localizing the four extremities and predicting the box, which increases the need for high-quality content embeddings and thus the training difficulty. Our approach, named conditional DETR, learns a conditional spatial query from the decoder embedding for decoder multi-head cross-attention. The benefit is that, through the conditional spatial query, each cross-attention head is able to attend to a band containing a distinct region, e.g., one object extremity or a region inside the object box. This narrows down the spatial range for localizing the distinct regions for object classification and box regression, thus relaxing the dependence on the content embeddings and easing the training. Empirical results show that conditional DETR converges 6.7x faster for the backbones R50 and R101 and 10x faster for the stronger backbones DC5-R50 and DC5-R101. Code is available at https://github.com/Atten4Vis/ConditionalDETR.
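The conditional spatial query described in the abstract can be sketched as follows: a learned transformation of the decoder embedding modulates, element-wise, a sinusoidal positional encoding of a reference point, and the result is concatenated with the content query for cross-attention. This is an illustrative numpy sketch, not the released implementation; the `tanh`-based linear map (`W`, `b`) stands in for the paper's learned transformation, and the function names are assumptions.

```python
import numpy as np

def sinusoidal_embed(ref_points, dim=256, temperature=10000):
    """Sinusoidal encoding of normalized 2-D reference points (x, y).

    Each coordinate gets dim // 2 channels of interleaved sin/cos,
    in the style of DETR-family positional encodings.
    """
    half = dim // 2                                  # channels per coordinate
    freq = temperature ** (np.arange(0, half, 2) / half)
    embs = []
    for c in range(2):                               # x, then y
        scaled = ref_points[:, c:c + 1] * 2 * np.pi / freq
        embs.append(np.sin(scaled))
        embs.append(np.cos(scaled))
    return np.concatenate(embs, axis=1)              # (num_queries, dim)

def conditional_spatial_query(decoder_emb, ref_points, W, b):
    """Conditional spatial query: a learned transform of the decoder
    embedding scales the positional encoding element-wise (illustrative
    tanh-linear map standing in for the paper's learned FFN)."""
    lam = np.tanh(decoder_emb @ W + b)               # (num_queries, d)
    p = sinusoidal_embed(ref_points, dim=decoder_emb.shape[1])
    return lam * p                                   # element-wise product

# Toy shapes: 4 decoder queries of dimension 256.
num_queries, d = 4, 256
rng = np.random.default_rng(0)
dec = rng.normal(size=(num_queries, d))              # decoder embeddings
refs = rng.uniform(size=(num_queries, 2))            # normalized (x, y) refs
W, b = rng.normal(size=(d, d)) * 0.01, np.zeros(d)

content_q = dec                                      # content query part
spatial_q = conditional_spatial_query(dec, refs, W, b)
# Cross-attention then uses content and spatial parts jointly;
# here they are concatenated along the channel axis.
query = np.concatenate([content_q, spatial_q], axis=1)  # shape (4, 512)
```

Because each head sees a spatial query tied to a reference point rather than to the content embedding alone, the attention map can focus on a narrow spatial band early in training, which is the mechanism the abstract credits for the faster convergence.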
Pages: 3631-3640
Page count: 10