E-Unet: an atrous convolution-based neural network for building extraction from high-resolution remote sensing images

Cited by: 0
Authors
He Z. [1 ]
Ding H. [1 ]
An B. [1 ]
Affiliations
[1] School of Remote Sensing & Geomatics Engineering, Nanjing University of Information Science and Technology, Nanjing
Source
Cehui Xuebao/Acta Geodaetica et Cartographica Sinica | 2022 / Vol. 51 / No. 03
Funding
National Natural Science Foundation of China
Keywords
Atrous convolution; Building extraction; Deep learning; E-Unet; High-resolution remote sensing image
DOI
10.11947/j.AGCS.2022.20200601
Abstract
Extracting urban buildings from high-resolution remote sensing images is a current research hotspot, but because buildings vary in color, shape and size and contain a wide range of fine details, extraction results commonly suffer from blurred edges, rounded corners and loss of detail. To address this, this study proposes E-Unet, a deep learning network based on atrous convolution. In the structural design, skip connections are introduced to reduce the loss of edge and corner detail; a newly designed convolution module enlarges the receptive field while reducing the number of parameters; a Dropout module is added to the bottom layer of the network to avoid overfitting; and histogram equalization, Gaussian bilateral filtering and inter-band ratio operations are applied to the raw data, which are then stacked into a multi-band tensor and fed to the network (without conversion to grey-scale images). To validate network performance and clarify the sources of the improvement, two sets of experiments were designed on the Massachusetts and WHU building datasets. The first set compares E-Unet against Unet and ResNet. The results show that E-Unet not only outperforms Unet and ResNet on all accuracy evaluation metrics, but also preserves the details of the extraction results with high fidelity. The second set is a preprocessing ablation experiment that separates the contribution of the network itself from that of the preprocessing module. Together, the two sets of experiments demonstrate the effectiveness of the preprocessing module and the superiority of the proposed network. © 2022, Surveying and Mapping Press. All rights reserved.
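The key idea the abstract relies on, that atrous (dilated) convolution enlarges the receptive field without adding parameters, can be illustrated with a minimal NumPy sketch. This is not the paper's E-Unet module; the function name, kernel, and toy input are illustrative only.

```python
import numpy as np

def atrous_conv2d(x, kernel, rate):
    """Single-channel atrous (dilated) convolution with 'valid' padding.

    With dilation rate r, a k x k kernel samples the input at stride r,
    covering an effective (k-1)*r + 1 window while still using only
    k*k weights -- a larger receptive field at no parameter cost.
    """
    k = kernel.shape[0]
    span = (k - 1) * rate + 1          # effective receptive-field size
    h, w = x.shape
    out = np.zeros((h - span + 1, w - span + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # dilated sampling: pick every `rate`-th pixel in the window
            patch = x[i:i + span:rate, j:j + span:rate]
            out[i, j] = np.sum(patch * kernel)
    return out

x = np.arange(36, dtype=float).reshape(6, 6)   # toy 6x6 "image"
k = np.ones((3, 3)) / 9.0                      # 3x3 averaging kernel

y1 = atrous_conv2d(x, k, rate=1)   # ordinary 3x3 conv -> 4x4 output
y2 = atrous_conv2d(x, k, rate=2)   # same 9 weights, 5x5 window -> 2x2 output
```

Both calls use the same nine weights, but at rate 2 each output pixel aggregates a 5x5 neighborhood, which is why the abstract's convolution module can widen context while reducing parameter count relative to stacking larger kernels.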
Pages: 457-467
Page count: 10