Military Vehicle Object Detection Based on Hierarchical Feature Representation and Refined Localization

Cited by: 8
Authors
Ouyang, Yan [1 ]
Wang, Xinqing [1 ]
Hu, Ruizhe [1 ]
Xu, Honghui [1 ]
Shao, Faming [1 ]
Affiliations
[1] Army Engn Univ, Coll Field Engn, Dept Mech Engn, Nanjing 210007, Peoples R China
Funding
China Postdoctoral Science Foundation; National Natural Science Foundation of China;
Keywords
Military vehicles; Object detection; Feature extraction; Task analysis; Detectors; Location awareness; Reinforcement learning; Military vehicle objects; object detection; reinforcement learning; hierarchical feature representation;
DOI
10.1109/ACCESS.2022.3207153
CLC classification number
TP [Automation technology, computer technology];
Discipline classification code
0812;
Abstract
Military vehicle object detection in complex environments underpins reconnaissance and tracking tasks for weapons and equipment, and is of great significance for informatized and intelligent combat. To address the poor performance of traditional detection algorithms on military vehicles, we propose a military vehicle object detection method based on hierarchical feature representation and reinforcement-learning-based refined localization, referred to as MVODM. First, we construct a reliable dataset, MVD, for the military vehicle detection task. Second, we design two strategies to improve the detector: hierarchical feature representation and reinforcement-learning-based refined localization. The hierarchical feature representation strategy helps the detector select the feature layer best suited to the object scale, while the reinforcement-learning-based refined localization strategy improves the accuracy of the predicted localization boxes. Combined, the two strategies effectively improve the detector's performance. Finally, experimental results on the self-built dataset show that the proposed MVODM achieves excellent detection performance and accomplishes the military vehicle detection task well.
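The abstract describes two mechanisms: routing each object to the feature layer that matches its scale, and iteratively refining a coarse localization box with reinforcement-learning-style actions. The paper's exact formulation is not reproduced in this record; the Python sketch below only illustrates the general idea, using the standard FPN level-assignment heuristic and a hypothetical discrete action set for box refinement. Both are stand-in assumptions, not the authors' method.

```python
# Minimal sketch of the two ideas summarized in the abstract (assumed, not MVODM itself).
import math

def assign_pyramid_level(box_w, box_h, k0=4, canonical=224, k_min=2, k_max=5):
    """Pick a feature-pyramid level matched to the object scale using the
    common FPN heuristic: k = k0 + floor(log2(sqrt(w*h) / canonical))."""
    k = k0 + math.floor(math.log2(math.sqrt(box_w * box_h) / canonical))
    return max(k_min, min(k_max, k))

# Hypothetical discrete actions an RL agent could apply to a coarse box
# (x1, y1, x2, y2); the reward would typically be the change in IoU with
# the ground-truth box after each action.
ACTIONS = {
    "left":   (-4, 0, -4, 0),
    "right":  ( 4, 0,  4, 0),
    "up":     (0, -4, 0, -4),
    "down":   (0,  4, 0,  4),
    "wider":  (-2, 0,  2, 0),
    "taller": (0, -2, 0,  2),
    "stop":   (0, 0, 0, 0),
}

def apply_action(box, name):
    """Shift or resize the box according to the chosen refinement action."""
    dx1, dy1, dx2, dy2 = ACTIONS[name]
    x1, y1, x2, y2 = box
    return (x1 + dx1, y1 + dy1, x2 + dx2, y2 + dy2)

if __name__ == "__main__":
    print(assign_pyramid_level(32, 48))     # small vehicle -> shallow pyramid level
    print(assign_pyramid_level(512, 512))   # large vehicle -> deep pyramid level
    print(apply_action((100, 80, 260, 200), "right"))
```

In this sketch the level assignment gives small objects to high-resolution layers and large objects to coarse layers, and the action loop would run until the agent selects "stop"; the actual selection rule, action set, and reward in the paper may differ.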
Pages: 99897-99908
Page count: 12
Cited references
39 in total
[1] [Anonymous], "Integral channel features," 2009, DOI 10.5244/C.23.91
[2] Bochkovskiy A., 2020, arXiv:2004.10934
[3] Caicedo, Juan C.; Lazebnik, Svetlana, "Active Object Localization with Deep Reinforcement Learning," 2015 IEEE International Conference on Computer Vision (ICCV), 2015, pp. 2488-2496
[4] Chen, Miaojiang; Liu, Anfeng; Liu, Wei; Ota, Kaoru; Dong, Mianxiong; Xiong, N. Neal, "RDRL: A Recurrent Deep Reinforcement Learning Scheme for Dynamic Spectrum Access in Reconfigurable Wireless Networks," IEEE Transactions on Network Science and Engineering, 2022, 9(2): 364-376
[5] Chen, Miaojiang; Liu, Wei; Wang, Tian; Zhang, Shaobo; Liu, Anfeng, "A game-based deep reinforcement learning approach for energy-efficient computation in MEC systems," Knowledge-Based Systems, 2022, 235
[6] Dai J. F., 2016, Advances in Neural Information Processing Systems, Vol. 29
[7] Dalal, N.; Triggs, B., "Histograms of oriented gradients for human detection," 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Vol. 1, 2005, pp. 886-893
[8] Deng J., 2009, Proc. IEEE CVPR, pp. 248, DOI 10.1109/CVPRW.2009.5206848
[9] Geiger A., 2012, Proc. IEEE CVPR, pp. 3354, DOI 10.1109/CVPR.2012.6248074
[10] Girshick, Ross, "Fast R-CNN," 2015 IEEE International Conference on Computer Vision (ICCV), 2015, pp. 1440-1448