Feature Enhancement Network for Object Detection in Optical Remote Sensing Images

Cited by: 61
Authors
Cheng, Gong [1 ]
Lang, Chunbo [1 ]
Wu, Maoxiong [1 ]
Xie, Xingxing [1 ]
Yao, Xiwen [1 ]
Han, Junwei [1 ]
Affiliations
[1] Northwestern Polytech Univ, Sch Automat, Xian 710129, Peoples R China
Source
JOURNAL OF REMOTE SENSING | 2021, Vol. 2021
Funding
U.S. National Science Foundation;
Keywords
DOI
10.34133/2021/9805389
Chinese Library Classification
X [Environmental Science, Safety Science];
Discipline Classification Code
08 ; 0830 ;
Abstract
Automatic and robust object detection in remote sensing images is of vital significance in real-world applications such as land resource management and disaster rescue. However, state-of-the-art detection algorithms designed for natural images perform poorly when applied directly to remote sensing images, largely because of variations in object scale and aspect ratio, indistinguishable object appearances, and complex background scenarios. In this paper, we propose a novel Feature Enhancement Network (FENet) for object detection in optical remote sensing images, which consists of a Dual Attention Feature Enhancement (DAFE) module and a Context Feature Enhancement (CFE) module. Specifically, the DAFE module guides the network to focus on the distinctive features of the objects of interest and to suppress useless ones by jointly recalibrating the spatial and channel feature responses. The CFE module captures global context cues and selectively strengthens class-aware features by leveraging image-level contextual information that indicates the presence or absence of each object class. To this end, we employ a context encoding loss to regularize model training, which encourages the detector to understand the scene better and narrows the set of probable object categories in prediction. We build the proposed FENet by unifying DAFE and CFE within the Faster R-CNN framework. In the experiments, we evaluate the proposed method on two large-scale remote sensing image object detection datasets, DIOR and DOTA, and demonstrate its effectiveness compared with baseline methods.
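The two ideas in the abstract — joint spatial/channel recalibration (DAFE) and an image-level context encoding loss (CFE) — can be sketched in a few lines. The NumPy code below is a minimal, parameter-free illustration only, not the paper's implementation: the actual modules use learned layers inside Faster R-CNN, and the function names and plain sigmoid gating here are assumptions made for clarity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dual_attention_recalibrate(feat):
    # feat: (C, H, W) convolutional feature map.
    # Channel attention: global average pooling yields one descriptor per
    # channel; a sigmoid turns it into a per-channel gate in (0, 1).
    channel_gate = sigmoid(feat.mean(axis=(1, 2)))   # shape (C,)
    # Spatial attention: averaging over channels yields one descriptor per
    # location; a sigmoid turns it into a per-location gate in (0, 1).
    spatial_gate = sigmoid(feat.mean(axis=0))        # shape (H, W)
    # Jointly recalibrate responses along both dimensions.
    return feat * channel_gate[:, None, None] * spatial_gate[None, :, :]

def context_encoding_loss(logits, presence):
    # Image-level multi-label loss: an independent binary cross-entropy
    # term per class for its presence/absence in the scene.
    p = sigmoid(logits)
    eps = 1e-7
    return -np.mean(presence * np.log(p + eps)
                    + (1.0 - presence) * np.log(1.0 - p + eps))

x = np.random.rand(8, 4, 4)               # toy feature map, 8 channels
y = dual_attention_recalibrate(x)         # same shape, gated responses
loss = context_encoding_loss(np.array([4.0, -4.0]), np.array([1.0, 0.0]))
```

Because both gates lie strictly in (0, 1), the recalibrated map is an element-wise rescaling of the input; in the learned version, the gates depend on trainable parameters rather than raw pooled statistics.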
Pages: 14