A Novel Hybrid Method for Urban Green Space Segmentation from High-Resolution Remote Sensing Images

Cited by: 6
Authors
Wang, Wei [1 ,2 ]
Cheng, Yong [3 ]
Ren, Zhoupeng [1 ]
He, Jiaxin [2 ]
Zhao, Yingfen [4 ]
Wang, Jun [3 ]
Zhang, Wenjie [1 ,5 ]
Affiliations
[1] Chinese Acad Sci, State Key Lab Resources & Environm Informat Syst, Inst Geog Sci & Nat Resources Res, Beijing 100101, Peoples R China
[2] Nanjing Univ Informat Sci & Technol, Sch Automat, Nanjing 210044, Peoples R China
[3] Nanjing Univ Informat Sci & Technol, Sch Software, Nanjing 210044, Peoples R China
[4] China Ctr Resources Satellite Data & Applicat, Beijing 100094, Peoples R China
[5] Nanjing Univ Informat Sci & Technol, Sch Geog Sci, Nanjing 210044, Peoples R China
Keywords
urban green space; deep learning; high-resolution remote sensing images; multiscale pooling attention; feature engineering; SEMANTIC SEGMENTATION; CITIES;
DOI
10.3390/rs15235472
Chinese Library Classification (CLC)
X [Environmental Science, Safety Science];
Discipline Classification Code
08; 0830;
Abstract
The combined use of high-resolution remote sensing (HRS) images and deep learning (DL) methods can further improve the accuracy of urban green space (UGS) mapping. However, in UGS segmentation, most current DL methods focus on improving the model structure and ignore the spectral information of HRS images. In this paper, a multiscale attention feature aggregation network (MAFANet) incorporating feature engineering was proposed to segment UGS from HRS images (GaoFen-2, GF-2). By constructing a new decoder block, a bilateral feature extraction module, and a multiscale pooling attention module, MAFANet enhanced the extraction of UGS edge features and improved segmentation accuracy. By incorporating feature engineering, including the false color image and the Normalized Difference Vegetation Index (NDVI), MAFANet further distinguished UGS boundaries. Two labeled UGS datasets, UGS-1 and UGS-2, were built from GF-2 imagery. Comparison experiments with other DL methods were conducted on UGS-1 and UGS-2 to test the robustness of MAFANet. We found that the mean Intersection over Union (MIOU) of MAFANet on the UGS-1 and UGS-2 datasets was 72.15% and 74.64%, respectively, outperforming other existing DL methods. In addition, on UGS-1, incorporating the false color image improved the MIOU of MAFANet from 72.15% to 74.64%; incorporating the vegetation index (NDVI) improved it from 72.15% to 74.09%; and incorporating both the false color image and NDVI improved it from 72.15% to 74.73%. Our experimental results demonstrated that the proposed MAFANet with feature engineering (false color image and NDVI) outperforms state-of-the-art (SOTA) methods in UGS segmentation, and that the false color image is more effective than the NDVI for enhancing the representation of green space information. This study provides a practical solution for UGS segmentation and promotes UGS mapping.
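The feature engineering described in the abstract relies on two standard spectral products derived from the GF-2 multispectral bands: a false color composite and the NDVI. As a minimal sketch only (the authors' actual preprocessing is not described in this record), the snippet below assumes the four GF-2 bands (blue, green, red, near-infrared) are available as 2-D NumPy arrays and shows one plausible way to compute both features and stack them with the raw bands as additional input channels for a segmentation network; the band ordering and the fusion strategy are assumptions, not MAFANet's documented design.

```python
import numpy as np

def compute_ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red), computed in float to avoid integer overflow."""
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    return (nir - red) / (nir + red + eps)

def false_color_composite(nir: np.ndarray, red: np.ndarray, green: np.ndarray) -> np.ndarray:
    """Standard false color composite: NIR, red, green mapped to R, G, B (vegetation appears red)."""
    return np.stack([nir, red, green], axis=-1).astype(np.float32)

def build_input_stack(blue: np.ndarray, green: np.ndarray,
                      red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Hypothetical fusion step: concatenate raw bands, false color composite, and NDVI
    into a single channels-last array; MAFANet's actual input layout is an assumption here."""
    bands = np.stack([blue, green, red, nir], axis=-1).astype(np.float32)
    fc = false_color_composite(nir, red, green)
    vi = compute_ndvi(nir, red)[..., np.newaxis]
    return np.concatenate([bands, fc, vi], axis=-1)  # shape: (H, W, 9)
```

The reported accuracies are mean Intersection over Union (MIOU) scores. For reference, a minimal per-class IOU average over integer label maps can be computed as follows (handling of ignored labels is omitted):

```python
def mean_iou(pred: np.ndarray, label: np.ndarray, num_classes: int) -> float:
    """Mean IOU over all classes that appear in either the prediction or the label."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, label == c).sum()
        union = np.logical_or(pred == c, label == c).sum()
        if union > 0:  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))
```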
Pages: 18
Related Papers
46 in total
  • [1] Green space and loneliness: A systematic review with theoretical and methodological guidance for future research
    Astell-Burt, Thomas
    Hartig, Terry
    Putra, I. Gusti Ngurah Edi
    Walsan, Ramya
    Dendup, Tashi
    Feng, Xiaoqi
    [J]. SCIENCE OF THE TOTAL ENVIRONMENT, 2022, 847
  • [2] SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation
    Badrinarayanan, Vijay
    Kendall, Alex
    Cipolla, Roberto
    [J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2017, 39 (12) : 2481 - 2495
  • [3] A deep learning method for building height estimation using high-resolution multi-view imagery over urban areas: A case study of 42 Chinese cities
    Cao, Yinxia
    Huang, Xin
    [J]. REMOTE SENSING OF ENVIRONMENT, 2021, 264
  • [4] Contrasting inequality in human exposure to greenspace between cities of Global North and Global South
    Chen, Bin
    Wu, Shengbiao
    Song, Yimeng
    Webster, Chris
    Xu, Bing
    Gong, Peng
    [J]. NATURE COMMUNICATIONS, 2022, 13 (01)
  • [5] Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation
    Chen, Liang-Chieh
    Zhu, Yukun
    Papandreou, George
    Schroff, Florian
    Adam, Hartwig
    [J]. COMPUTER VISION - ECCV 2018, PT VII, 2018, 11211 : 833 - 851
  • [6] Multi-scale Feature Fusion and Transformer Network for urban green space segmentation from high-resolution remote sensing images
    Cheng, Yong
    Wang, Wei
    Ren, Zhoupeng
    Zhao, Yingfen
    Liao, Yilan
    Ge, Yong
    Wang, Jun
    He, Jiaxin
    Gu, Yakang
    Wang, Yixuan
    Zhang, Wenjie
    Zhang, Ce
    [J]. INTERNATIONAL JOURNAL OF APPLIED EARTH OBSERVATION AND GEOINFORMATION, 2023, 124
  • [7] Xception: Deep Learning with Depthwise Separable Convolutions
    Chollet, Francois
    [J]. 30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, : 1800 - 1807
  • [8] An integrated methodology to assess the benefits of urban green space
    De Ridder, K
    Adamec, V
    Bañuelos, A
    Bruse, M
    Bürger, M
Damsgaard, O
    Dufek, J
    Hirsch, J
    Lefebre, F
    Pérez-Lacorzana, JM
    Thierry, A
    Weber, C
    [J]. SCIENCE OF THE TOTAL ENVIRONMENT, 2004, 334 : 489 - 497
  • [9] Addressing validation challenges for TROPOMI solar-induced chlorophyll fluorescence products using tower-based measurements and an NIRv-scaled approach
    Du, Shanshan
    Liu, Xinjie
    Chen, Jidai
    Duan, Weina
    Liu, Liangyun
    [J]. REMOTE SENSING OF ENVIRONMENT, 2023, 290
  • [10] Duta I.C., 2020, arXiv, DOI 10.48550/arXiv.2006.11538