Adaptive feature fusion with attention mechanism for multi-scale target detection

Cited by: 35
Authors
Ju, Moran [1 ,2 ,3 ,4 ,5 ]
Luo, Jiangning [6 ]
Wang, Zhongbo [1 ,2 ,3 ,4 ,5 ]
Luo, Haibo [1 ,2 ,4 ,5 ]
Affiliations
[1] Chinese Acad Sci, Shenyang Inst Automat, Shenyang 110016, Liaoning, Peoples R China
[2] Chinese Acad Sci, Inst Robot & Intelligent Mfg, Shenyang 110016, Liaoning, Peoples R China
[3] Univ Chinese Acad Sci, Beijing 100049, Peoples R China
[4] Chinese Acad Sci, Key Lab Opt Elect Informat Proc, Shenyang 110016, Liaoning, Peoples R China
[5] Key Lab Image Understanding & Comp Vis, Shenyang 110016, Liaoning, Peoples R China
[6] McGill Univ, Montreal, PQ H3A 0G4, Canada
Keywords
Deep learning; Target detection; Adaptive feature fusion; Attention mechanism; Recognition
DOI
10.1007/s00521-020-05150-9
CLC number
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
To detect targets of different sizes, detectors such as YOLO V3 and DSSD produce multi-scale outputs. To improve detection performance, YOLO V3 and DSSD perform feature fusion by combining two adjacent scales. However, fusing only adjacent scales is insufficient: it does not exploit the features at the remaining scales. Moreover, concatenation, the most common fusion operation, provides no mechanism for learning the importance and correlation of features at different scales. In this paper, we propose adaptive feature fusion with attention mechanism (AFFAM) for multi-scale target detection. AFFAM uses a pathway layer and a subpixel convolution layer to resize the feature maps, which helps learn better and more complex feature mappings. In addition, AFFAM uses a global attention mechanism and a spatial position attention mechanism to adaptively learn, respectively, the correlation of channel features and the importance of spatial features at different scales. Finally, we combine AFFAM with YOLO V3 to build an efficient multi-scale target detector. Comparative experiments are conducted on the PASCAL VOC, KITTI and Smart UVM datasets. Compared with state-of-the-art target detectors, YOLO V3 with AFFAM achieves 84.34% mean average precision (mAP) at 19.9 FPS on PASCAL VOC, 87.2% mAP at 21 FPS on KITTI and 99.22% mAP at 20.6 FPS on Smart UVM, outperforming the other advanced detectors.
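The abstract names two ingredients that can be sketched concretely: subpixel convolution (whose core is the pixel-shuffle rearrangement that trades channels for spatial resolution) and a global channel-attention weighting applied before fusing maps from different scales. The NumPy sketch below is not the authors' implementation; the attention weights here are derived directly from global average pooling rather than learned, purely to illustrate the data flow.

```python
import numpy as np

def pixel_shuffle(x, r):
    """Sub-pixel rearrangement: (C*r^2, H, W) -> (C, H*r, W*r).

    This is the channel-to-space step at the heart of a subpixel
    convolution layer, used to upsample a feature map.
    """
    c2, h, w = x.shape
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)          # (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def global_channel_attention(feat):
    """Weight each channel by a softmax over its globally pooled response.

    A stand-in for the learned global attention in AFFAM: global average
    pooling summarizes each channel, and the resulting weights rescale
    the feature map channel-wise.
    """
    pooled = feat.mean(axis=(1, 2))          # (C,)
    w = softmax(pooled)
    return feat * w[:, None, None]

def fuse(features):
    """Fuse feature maps (already resized to a common C, H, W) by
    attending to each one and summing, instead of plain concatenation."""
    attended = [global_channel_attention(f) for f in features]
    return np.sum(attended, axis=0)

# Toy usage: upsample a coarse map with pixel shuffle, then fuse it
# with a fine-scale map of matching shape.
coarse = np.random.rand(3 * 4, 4, 4)         # (C*r^2, H, W) with r = 2
fine = np.random.rand(3, 8, 8)
fused = fuse([pixel_shuffle(coarse, 2), fine])
print(fused.shape)                           # (3, 8, 8)
```

In the paper's setting these attention weights are produced by learned layers; the sketch only shows why such a mechanism can express "importance of each scale" in a way that plain concatenation cannot.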
Pages: 2769-2781 (13 pages)