A Global-Local Self-Adaptive Network for Drone-View Object Detection

Cited by: 150
Authors
Deng, Sutao [1 ]
Li, Shuai [1 ,2 ]
Xie, Ke [3 ]
Song, Wenfeng [1 ]
Liao, Xiao [3 ]
Hao, Aimin [1 ,2 ]
Qin, Hong [4 ]
Affiliations
[1] Beihang Univ, State Key Lab Virtual Real Technol & Syst, Beijing 100191, Peoples R China
[2] Pengcheng Lab, Shenzhen 518055, Peoples R China
[3] State Grid Informat & Telecommun Grp Co Ltd, Beijing 100052, Peoples R China
[4] SUNY Stony Brook, Dept Comp Sci, Stony Brook, NY 11794 USA
Funding
National Natural Science Foundation of China; Beijing Natural Science Foundation; U.S. National Science Foundation;
Keywords
Detectors; Object detection; Training; Training data; Proposals; Feature extraction; Convolution; Drone-view object detection; tiny-scale object detection; object detection in crowded regions; coarse-to-fine adaptive detector;
DOI
10.1109/TIP.2020.3045636
CLC number
TP18 [Theory of Artificial Intelligence];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Directly benefiting from deep learning methods, object detection has witnessed a great performance boost in recent years. However, drone-view object detection remains challenging for two main reasons: (1) tiny-scale objects, which are blurrier than their ground-view counterparts, offer less valuable information for accurate and robust detection; (2) unevenly distributed objects make detection inefficient, especially in regions occupied by crowded objects. To confront these challenges, we propose an end-to-end global-local self-adaptive network (GLSAN) in this paper. The key components of our GLSAN include a global-local detection network (GLDN), a simple yet efficient self-adaptive region selecting algorithm (SARSA), and a local super-resolution network (LSRN). We integrate a global-local fusion strategy into a progressive scale-varying network to perform more precise detection, where the local fine detector adaptively refines the bounding boxes produced by the global coarse detector, cropping the original images for higher-resolution detection. SARSA dynamically crops the crowded regions of the input images; it is unsupervised and can easily be plugged into the network. Additionally, we train the LSRN to enlarge the cropped images, providing more detailed information for finer-scale feature extraction and helping the detector distinguish foreground from background more easily. SARSA and the LSRN also serve as data augmentation during network training, making the detector more robust. Extensive experiments and comprehensive evaluations on the VisDrone2019-DET benchmark dataset and the UAVDT dataset demonstrate the effectiveness and adaptivity of our method. Towards an industrial application, our network is also applied to a DroneBolts dataset with proven advantages. Our source code is available at https://github.com/dengsutao/glsan.
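The coarse-to-fine pipeline summarized above — a global detector finds objects, an unsupervised region selector identifies crowded areas, and a fine detector re-processes upscaled crops — can be illustrated with a toy sketch. This is not the paper's SARSA algorithm: the grid-density heuristic, function names, and parameters (`grid`, `min_count`) below are illustrative assumptions; the actual method should be taken from the authors' released code.

```python
# Toy sketch of crowded-region selection in the spirit of SARSA as described
# in the abstract (NOT the paper's actual algorithm). Given the global
# detector's boxes, we divide the image into a coarse grid, count detection
# centers per cell, and return dense cells as crop regions. In the full
# pipeline, each crop would then be enlarged (e.g., by a super-resolution
# network like the LSRN) and passed to the local fine detector.

def box_centers(boxes):
    """Centers of (x1, y1, x2, y2) boxes."""
    return [((x1 + x2) / 2.0, (y1 + y2) / 2.0) for x1, y1, x2, y2 in boxes]

def crowded_crops(boxes, img_w, img_h, grid=4, min_count=3):
    """Return grid cells holding at least `min_count` detection centers,
    as (x1, y1, x2, y2) crop regions for higher-resolution re-detection."""
    cw, ch = img_w / grid, img_h / grid
    counts = {}
    for cx, cy in box_centers(boxes):
        cell = (min(int(cx // cw), grid - 1), min(int(cy // ch), grid - 1))
        counts[cell] = counts.get(cell, 0) + 1
    return [
        (gx * cw, gy * ch, (gx + 1) * cw, (gy + 1) * ch)
        for (gx, gy), n in counts.items()
        if n >= min_count
    ]

# Example: three detections clustered in the top-left of a 400x400 image,
# one isolated detection in the bottom-right.
detections = [(10, 10, 30, 30), (40, 40, 60, 60), (70, 10, 90, 30),
              (300, 300, 320, 320)]
crops = crowded_crops(detections, img_w=400, img_h=400)
print(crops)  # only the crowded top-left cell is selected for refinement
```

Only the top-left cell (three centers) exceeds the threshold, so a single crop `(0, 0, 100, 100)` is returned; the isolated bottom-right object keeps the coarse detector's box.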
Pages: 1556-1569
Page count: 14