EANet: Edge-Aware Network for the Extraction of Buildings from Aerial Images

Cited by: 58
Authors
Yang, Guang [1 ,2 ]
Zhang, Qian [1 ,2 ]
Zhang, Guixu [1 ,2 ]
Affiliations
[1] East China Normal Univ, Shanghai Key Lab Multidimens Informat Proc, Shanghai 200241, Peoples R China
[2] East China Normal Univ, Sch Comp Sci & Technol, Shanghai 200062, Peoples R China
Keywords
semantic segmentation; convolutional neural networks; building extraction; edge; multi-task learning;
DOI
10.3390/rs12132161
Chinese Library Classification (CLC)
X [Environmental Science; Safety Science]
Discipline Classification Code
08; 0830
Abstract
Deep learning methods have been used to extract buildings from remote sensing images and have achieved state-of-the-art performance. Most previous work has emphasized multi-scale feature fusion or enlarging the receptive field to capture global context, rather than focusing on low-level details such as edges. In this work, we propose a novel end-to-end edge-aware network, EANet, together with an edge-aware loss, for extracting accurate building footprints from aerial images. Specifically, the architecture is composed of an image segmentation network and an edge perception network that are responsible for building prediction and edge investigation, respectively. We evaluated our approach on the International Society for Photogrammetry and Remote Sensing (ISPRS) Potsdam segmentation benchmark and the Wuhan University (WHU) building benchmark, where it achieved intersection-over-union scores of 90.19% and 93.33%, respectively, and top performance without additional datasets, data augmentation, or post-processing. EANet is effective at extracting buildings from aerial images, which shows that the quality of image segmentation can be improved by attending to edge details.
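The abstract describes a multi-task design: a segmentation branch predicts the building mask while an edge branch predicts building boundaries, trained jointly with an edge-aware loss. The paper does not spell out the loss in this record, so the following is a minimal NumPy sketch assuming the common formulation of a weighted sum of two per-pixel binary cross-entropy terms; the function names and the weight `lam` are illustrative, not the authors' exact implementation.

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    # Per-pixel binary cross-entropy, averaged over all pixels.
    pred = np.clip(pred, eps, 1.0 - eps)  # avoid log(0)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def edge_aware_loss(seg_pred, seg_gt, edge_pred, edge_gt, lam=1.0):
    # Joint objective: segmentation loss plus a weighted
    # edge-perception loss from the auxiliary edge branch.
    return bce(seg_pred, seg_gt) + lam * bce(edge_pred, edge_gt)
```

During training, the edge ground truth is typically derived from the building mask (e.g. by morphological boundary extraction), so both branches supervise the same annotation from complementary viewpoints.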
Pages: 18
Cited References
69 records
[1]   Automatic urban building boundary extraction from high resolution aerial images using an innovative model of active contours [J].
Ahmadi, Salman ;
Zoej, M. J. Valadan ;
Ebadi, Hamid ;
Moghaddam, Hamid Abrishami ;
Mohammadzadeh, Ali .
INTERNATIONAL JOURNAL OF APPLIED EARTH OBSERVATION AND GEOINFORMATION, 2010, 12 (03) :150-157
[2]   [Anonymous], ECCV, 2018, DOI: 10.1007/978-3-030-01234-2_49
[3]   [Anonymous], arXiv preprint, 2017, DOI: 10.48550/arXiv.1706.05587
[4]  
[Anonymous], 2018, ISPRS 2D Semantic Labeling Contest
[5]   [Anonymous], arXiv:1811.08201
[6]   SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation [J].
Badrinarayanan, Vijay ;
Kendall, Alex ;
Cipolla, Roberto .
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2017, 39 (12) :2481-2495
[7]   DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs [J].
Chen, Liang-Chieh ;
Papandreou, George ;
Kokkinos, Iasonas ;
Murphy, Kevin ;
Yuille, Alan L. .
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2018, 40 (04) :834-848
[8]   Instance-aware Semantic Segmentation via Multi-task Network Cascades [J].
Dai, Jifeng ;
He, Kaiming ;
Sun, Jian .
2016 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2016, :3150-3158
[9]   A tutorial on the cross-entropy method [J].
De Boer, PT ;
Kroese, DP ;
Mannor, S ;
Rubinstein, RY .
ANNALS OF OPERATIONS RESEARCH, 2005, 134 (01) :19-67
[10]  Goyal P., arXiv:1706.02677