Iterative Fusion and Dual Enhancement for Accurate and Efficient Object Detection

Cited by: 0
Authors
Duan, Zhipeng [1 ]
Zhang, Zhiqiang [2 ]
Liu, Xinzhi [1 ]
Cheng, Guoan [1 ]
Xu, Liangfeng [1 ]
Zhan, Shu [1 ]
Affiliations
[1] Hefei Univ Technol, Sch Comp Sci & Informat, Key Lab Knowledge Engn Big Data, Minist Educ, Hefei 230000, Peoples R China
[2] Anhui Med Univ, Hosp 2, Hefei 230000, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Object detection; feature map; different scales; contextual information; feature fusion; feature enhancement;
DOI
10.1142/S0218126623502328
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812;
Abstract
The Single Shot Multibox Detector (SSD) uses multi-scale feature maps to detect and recognize objects, balancing accuracy and speed, but it remains limited in detecting small objects. Many researchers have designed new detectors that improve accuracy by changing the structure of the multi-scale feature pyramid, which has proved very useful. However, most of them simply merge several feature maps without fully exploiting the close relationship between features at different scales. In contrast, a novel feature fusion module and an effective feature enhancement module are proposed, which significantly improve the performance of the original SSD. In the feature fusion module, the feature pyramid is produced by iteratively fusing three feature maps with different receptive fields to capture contextual information. In the feature enhancement module, the features are enhanced along the channel and spatial dimensions simultaneously to improve their representational ability. With an input size of 512 x 512, our network achieves 82.5% mean Average Precision (mAP) on the VOC 2007 test set, 81.4% mAP on the VOC 2012 test set, and 34.8% mAP on COCO test-dev2017. Comparative experiments show that our method outperforms many state-of-the-art detectors in both accuracy and speed.
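The abstract describes two components: iterative fusion of three feature maps with different receptive fields, and enhancement of the fused features along channel and spatial dimensions at the same time. The following PyTorch sketch illustrates one plausible reading of those two ideas; the module names, channel counts, fusion order, and attention design (SE-style channel gate plus a convolutional spatial gate) are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of "iterative fusion" and "dual enhancement" as described
# in the abstract. All design details here are assumptions; the paper's
# actual modules may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F


class IterativeFusion(nn.Module):
    """Fuse three feature maps with different receptive fields into one level."""

    def __init__(self, channels: int = 256):
        super().__init__()
        self.smooth1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.smooth2 = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, shallow, middle, deep):
        # Step 1: upsample the deepest map (largest receptive field) to the
        # middle resolution and fuse.
        deep_up = F.interpolate(deep, size=middle.shape[-2:], mode="bilinear",
                                align_corners=False)
        fused = self.smooth1(middle + deep_up)
        # Step 2: iterate, carrying the fused context down to the shallow level.
        fused_up = F.interpolate(fused, size=shallow.shape[-2:], mode="bilinear",
                                 align_corners=False)
        return self.smooth2(shallow + fused_up)


class DualEnhancement(nn.Module):
    """Re-weight a feature map along channel and spatial dimensions together."""

    def __init__(self, channels: int = 256, reduction: int = 16):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Both gates are computed from the same input and applied jointly,
        # enhancing the features in channel and spatial dimensions at once.
        return x * self.channel_gate(x) * self.spatial_gate(x)


if __name__ == "__main__":
    # Three pyramid levels of a 512 x 512 input (channel count assumed).
    shallow = torch.randn(1, 256, 64, 64)
    middle = torch.randn(1, 256, 32, 32)
    deep = torch.randn(1, 256, 16, 16)
    fused = IterativeFusion()(shallow, middle, deep)
    enhanced = DualEnhancement()(fused)
    print(enhanced.shape)  # torch.Size([1, 256, 64, 64])
```

In this reading, "iterative" means the fused result of one step is itself fused with the next, shallower level, so contextual information propagates down the pyramid rather than being merged in a single concatenation.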
Pages: 16
Cited References (28 in total)
  • [1] Inside-Outside Net: Detecting Objects in Context with Skip Pooling and Recurrent Neural Networks
    Bell, Sean
    Zitnick, C. Lawrence
    Bala, Kavita
    Girshick, Ross
    [J]. 2016 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2016, : 2874 - 2883
  • [2] Dai JF, 2016, ADV NEUR IN, V29
  • [3] The Pascal Visual Object Classes (VOC) Challenge
    Everingham, Mark
    Van Gool, Luc
    Williams, Christopher K. I.
    Winn, John
    Zisserman, Andrew
    [J]. INTERNATIONAL JOURNAL OF COMPUTER VISION, 2010, 88 (02) : 303 - 338
  • [4] The PASCAL Visual Object Classes Challenge: A Retrospective
    Everingham, Mark
    Eslami, S. M. Ali
    Van Gool, Luc
    Williams, Christopher K. I.
    Winn, John
    Zisserman, Andrew
    [J]. INTERNATIONAL JOURNAL OF COMPUTER VISION, 2015, 111 (01) : 98 - 136
  • [5] Fu C. Y., ARXIV
  • [6] Girshick R, 2016, PROC CVPR IEEE, DOI 10.1109/CVPR.2016.91
  • [7] Deep Residual Learning for Image Recognition
    He, Kaiming
    Zhang, Xiangyu
    Ren, Shaoqing
    Sun, Jian
    [J]. 2016 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2016, : 770 - 778
  • [8] Hu J, 2018, PROC CVPR IEEE, P7132, DOI [10.1109/CVPR.2018.00745, 10.1109/TPAMI.2019.2913372]
  • [9] Receptive Field Fusion RetinaNet for Object Detection
    Huang, He
    Feng, Yong
    Zhou, MingLiang
    Qiang, Baohua
    Yan, Jielu
    Wei, Ran
    [J]. JOURNAL OF CIRCUITS SYSTEMS AND COMPUTERS, 2021, 30 (10)
  • [10] Jeong J., ARXIV