MAP-Net: Multiple Attending Path Neural Network for Building Footprint Extraction From Remote Sensed Imagery

Cited by: 208
Authors
Zhu, Qing [1 ]
Liao, Cheng [1 ]
Hu, Han [1 ]
Mei, Xiaoming [2 ]
Li, Haifeng [2 ]
Affiliations
[1] Southwest Jiaotong Univ, Fac Geosci & Environm Engn, Chengdu 611756, Peoples R China
[2] Cent South Univ, Sch Geosci & Infophys, Changsha 410083, Peoples R China
Source
IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING | 2021, Vol. 59, No. 7
Funding
National Natural Science Foundation of China
Keywords
Feature extraction; Buildings; Semantics; Data mining; Spatial resolution; Remote sensing; Convolution; Attention mechanism; building footprint extraction; deep learning; remote sensing imagery; semantic segmentation; SEMANTIC SEGMENTATION; AERIAL IMAGERY; SENSING IMAGERY; DATA FUSION; LIDAR DATA; POINT;
DOI
10.1109/TGRS.2020.3026051
Chinese Library Classification (CLC) codes
P3 [Geophysics]; P59 [Geochemistry]
Discipline classification codes
0708; 070902
Abstract
Building footprint extraction is a basic task in the fields of mapping, image understanding, computer vision, and so on. Accurately and efficiently extracting building footprints from a wide range of remotely sensed imagery remains a challenge due to the complex structures, variety of scales, and diverse appearances of buildings. Existing convolutional neural network (CNN)-based building extraction methods are criticized for their inability to detect tiny buildings, because the spatial information in CNN feature maps is lost during repeated pooling operations. In addition, large buildings still have inaccurate segmentation edges. Moreover, features extracted by a CNN are always partially restricted by the size of the receptive field, so large buildings with low texture are often extracted with discontinuities and holes. To alleviate these problems, multiscale strategies have been introduced in recent works to extract buildings at different scales. However, the higher-resolution features are generally extracted from shallow layers, which carry insufficient semantic information for tiny buildings. This article proposes a novel multiple attending path neural network (MAP-Net) for accurately extracting multiscale building footprints with precise boundaries. Unlike existing multiscale feature extraction strategies, MAP-Net learns spatial localization-preserved multiscale features through multiple parallel paths, in which each stage is generated gradually to extract high-level semantic features at a fixed resolution. An attention module then adaptively squeezes the channel-wise features extracted from each path for optimized multiscale fusion, and a pyramid spatial pooling module captures global dependencies to refine discontinuous building footprints. Experimental results show that our method achieves F1-score improvements of 0.88%, 0.93%, and 0.45% and intersection over union (IoU) improvements of 1.53%, 1.50%, and 0.82%, without increasing computational complexity, compared with the latest HRNetv2 on the Urban 3-D, DeepGlobe, and WHU data sets, respectively. Specifically, MAP-Net outperforms the multiscale aggregation fully convolutional network (MA-FCN), a state-of-the-art (SOTA) algorithm that relies on postprocessing and model-voting strategies, on the WHU data set without pretraining or postprocessing. The TensorFlow implementation is available at https://github.com/lehaifeng/MAPNet.
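The attention-based fusion step described in the abstract (channel-wise squeezing of the features coming from the parallel paths before multiscale fusion) can be illustrated with a short tf.keras sketch. This is a minimal, hypothetical example: the function name channel_attention_fusion, the squeeze-and-excitation-style gating, the reduction ratio, and the layer sizes are assumptions made here for illustration, not the authors' released architecture; the official TensorFlow implementation is at the GitHub link above. The pyramid spatial pooling module is omitted.

# Minimal, hypothetical tf.keras sketch of the channel-wise attention fusion
# described in the abstract. Names, sizes, and the reduction ratio are
# illustrative assumptions, not the authors' released code.
import tensorflow as tf
from tensorflow.keras import layers

def channel_attention_fusion(path_features, reduction=4):
    """Fuse multiscale path features with a squeeze-and-excitation-style gate.

    path_features: list of 4-D tensors [batch, H, W, C_i]; the paths are
    assumed to have been resampled to a common spatial resolution beforehand.
    """
    # Concatenate the features of every parallel path along the channel axis.
    fused = layers.Concatenate(axis=-1)(path_features)
    channels = fused.shape[-1]

    # Squeeze: global average pooling yields one descriptor per channel.
    squeezed = layers.GlobalAveragePooling2D()(fused)

    # Excitation: two dense layers learn channel-wise fusion weights in [0, 1].
    weights = layers.Dense(channels // reduction, activation="relu")(squeezed)
    weights = layers.Dense(channels, activation="sigmoid")(weights)
    weights = layers.Reshape((1, 1, channels))(weights)

    # Re-weight the concatenated features before the segmentation head.
    return layers.Multiply()([fused, weights])

if __name__ == "__main__":
    # Three hypothetical paths, already upsampled to a common 128 x 128 grid.
    inputs = [layers.Input(shape=(128, 128, c)) for c in (64, 96, 128)]
    outputs = channel_attention_fusion(inputs)
    tf.keras.Model(inputs, outputs).summary()

In this sketch a single gate is learned over all concatenated channels; per-path gates followed by concatenation would be an equally plausible reading of the abstract, and the paper itself should be consulted for the exact fusion order.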
Pages: 6169-6181
Number of pages: 13
References
51 entries in total
[11] Demir, Ilke; Koperski, Krzysztof; Lindenbaum, David; Pang, Guan; Huang, Jing; Basu, Saikat; Hughes, Forest; Tuia, Devis; Raskar, Ramesh. DeepGlobe 2018: A Challenge to Parse the Earth through Satellite Images. Proceedings 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2018: 172-181.
[12] Deng, Zhipeng; Sun, Hao; Zhou, Shilin; Zhao, Juanping; Lei, Lin; Zou, Huanxin. Multi-scale object detection in remote sensing imagery with convolutional neural networks. ISPRS Journal of Photogrammetry and Remote Sensing, 2018, 145: 3-22.
[13] Du, Jianli; Chen, Dong; Wang, Ruisheng; Peethambaran, Jiju; Mathiopoulos, P. Takis; Xie, Lei; Yun, Ting. A Novel Framework for 2.5-D Building Contouring From Large-Scale Residential Scenes. IEEE Transactions on Geoscience and Remote Sensing, 2019, 57(6): 4121-4145.
[14] Du, Shouji; Zhang, Yunsheng; Zou, Zhengrong; Xu, Shenghua; He, Xue; Chen, Siyang. Automatic building extraction from LiDAR data fusion of point and grid-based features. ISPRS Journal of Photogrammetry and Remote Sensing, 2017, 130: 294-307.
[15] Fu, Jun; Liu, Jing; Tian, Haijie; Li, Yong; Bao, Yongjun; Fang, Zhiwei; Lu, Hanqing. Dual Attention Network for Scene Segmentation. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019), 2019: 3141-3149.
[16] Gavankar, Nitin L.; Ghosh, Sanjay Kumar. Automatic building footprint extraction from high-resolution satellite image using mathematical morphology. European Journal of Remote Sensing, 2018, 51(1): 182-193.
[17] Gilani, Syed Ali Naqi; Awrangjeb, Mohammad; Lu, Guojun. An Automatic Building Extraction and Regularisation Technique Using LiDAR Point Cloud Data and Orthoimage. Remote Sensing, 2016, 8(3).
[18] Goldberg, Hirsh R.; Wang, Sean; Christie, Gordon A.; Brown, Myron Z. Urban 3D Challenge: Building Footprint Detection Using Orthorectified Imagery and Digital Surface Models from Commercial Satellites. Geospatial Informatics, Motion Imagery, and Network Analytics VIII, 2018, 10645.
[19] Hasan, S. M. K. Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2019: 7205. DOI: 10.1109/EMBC.2019.8856791.
[20] He, Junjun; Deng, Zhongying; Zhou, Lei; Wang, Yali; Qiao, Yu. Adaptive Pyramid Context Network for Semantic Segmentation. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019), 2019: 7511-7520.