VEDAM: Urban Vegetation Extraction Based on Deep Attention Model from High-Resolution Satellite Images

Cited by: 5
Authors
Yang, Bin [1 ]
Zhao, Mengci [2 ]
Xing, Ying [2 ]
Zeng, Fuping [3 ]
Sun, Zhaoyang [4 ]
Affiliations
[1] China Unicom Res Inst, Beijing 100048, Peoples R China
[2] Beijing Univ Posts & Telecommun, Sch Artificial Intelligence, Beijing 100876, Peoples R China
[3] Beihang Univ, Sch Reliabil & Syst Engn, Beijing 100191, Peoples R China
[4] China Natl Inst Standardizat, Beijing 100191, Peoples R China
Keywords
vegetation extraction; satellite image; semantic segmentation; attention; integrated satellite-terrestrial; SEGMENTATION; NETWORK;
DOI
10.3390/electronics12051215
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
With the rapid development of satellite and Internet of Things (IoT) technology, it has become increasingly convenient to acquire high-resolution satellite images from the ground. Extracting urban vegetation from high-resolution satellite images can provide valuable support for urban management decision-making. Deep-learning semantic segmentation has become an important method for vegetation extraction, but existing models represent context and spatial information poorly, which limits segmentation accuracy. Thus, Vegetation Extraction based on a Deep Attention Model (VEDAM) is proposed to enhance the representation of context and spatial information when extracting vegetation from satellite images. Specifically, continuous convolutions are used for feature extraction, and atrous convolutions are introduced to capture multi-scale context information. The extracted features are then enhanced by a Spatial Attention Module (SAM) and atrous spatial pyramid convolutions. In addition, an image-level feature obtained by image pooling encodes the global context and further improves overall performance. Experiments are conducted on the real-world Gaofen Image Dataset (GID), and the comparative results show that VEDAM achieves the best vegetation semantic segmentation mIoU (0.9136).
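The components named in the abstract (atrous convolutions for multi-scale context, a spatial attention module, and an image-pooling branch that encodes global context) can be illustrated with a minimal PyTorch sketch. This is an assumption-based illustration of those generic building blocks, not the authors' VEDAM implementation; the layer widths, dilation rates, and the CBAM-style spatial attention design are hypothetical choices.

```python
# Minimal sketch of atrous-convolution / spatial-attention / image-pooling
# building blocks, as described generically in the abstract. All hyperparameters
# (channels, dilation rates, kernel sizes) are assumptions, not VEDAM's values.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatialAttention(nn.Module):
    """CBAM-style spatial attention: re-weight each location with a learned mask."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg_pool = x.mean(dim=1, keepdim=True)         # channel-wise average
        max_pool = x.max(dim=1, keepdim=True).values   # channel-wise max
        mask = torch.sigmoid(self.conv(torch.cat([avg_pool, max_pool], dim=1)))
        return x * mask                                # emphasize salient locations


class ASPP(nn.Module):
    """Atrous spatial pyramid pooling with an image-level (global-context) branch."""
    def __init__(self, in_ch: int, out_ch: int, rates=(6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, 1)] +
            [nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates]
        )
        self.image_pool = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                   # global context via image pooling
            nn.Conv2d(in_ch, out_ch, 1),
        )
        self.project = nn.Conv2d(out_ch * (len(rates) + 2), out_ch, 1)

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = [branch(x) for branch in self.branches]
        pooled = F.interpolate(self.image_pool(x), size=(h, w),
                               mode="bilinear", align_corners=False)
        return self.project(torch.cat(feats + [pooled], dim=1))


if __name__ == "__main__":
    x = torch.randn(1, 256, 64, 64)                    # dummy backbone feature map
    y = ASPP(256, 256)(SpatialAttention()(x))
    print(y.shape)                                     # torch.Size([1, 256, 64, 64])
```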
Pages: 17