Building extraction based on hyperspectral remote sensing images and semisupervised deep learning with limited training samples

Times Cited: 2
Authors
Hui, He [1 ]
Ya-Dong, Sun [1 ]
Bo-Xiong, Yang [2 ]
Mu-Xi, Xie [1 ]
She-Lei, Li [2 ]
Bo, Zhou [2 ]
Kai-Cun, Zhang [3 ]
Affiliations
[1] Beijing Normal Univ, Adv Inst Nat Sci, Zhuhai 519087, Peoples R China
[2] Univ Sanya, Sch Informat & Intelligence Engn, Sanya 572022, Peoples R China
[3] Univ Sanya, Academician Chen Guoliang Team Innovat Ctr, Sanya 572022, Peoples R China
Funding
Natural Science Foundation of Hainan Province; National Natural Science Foundation of China;
Keywords
Building extraction; Limited training samples; Semantic segmentation model; Attention mechanism; Hyperspectral remote sensing; CLASSIFICATION; NETWORK; ATTENTION;
DOI
10.1016/j.compeleceng.2023.108851
Chinese Library Classification
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812;
Abstract
Hyperspectral remote sensing imaging technology supports many aspects of daily life through applications such as urban building information statistics and green vegetation estimation. Ensuring the accuracy of automatic thematic information extraction with limited samples remains a challenge. In this manuscript, a lightweight semantic segmentation model based on the encoder-decoder structure is proposed for extracting buildings from hyperspectral remote sensing images. As the encoder, the proposed model combines the lightweight MobileNet with multiscale feature fusion and a group dilated convolution to model both shallow and deep spatial and spectral features, and it employs an efficient combined standardized attention mechanism to select the most valuable bands and local information. Extensive experiments reveal that our method achieves greater accuracy than state-of-the-art lightweight models in building extraction tasks. We also demonstrate the superiority of our method for insufficient training sample sizes. When only 50% of the samples of the initial training set were used, the mean intersection over union (mIoU) reached 91.90%, 4.5% higher than that of the next best method. For training sets composed of only 16 and 8 images, the mIoU values were 89.42% and 77.11%, respectively, 13.6 and 18 percentage points higher than those of the next best method. Visualizations of the results show that the proposed method clearly outperformed the compared methods. The model proposed in this paper is therefore suitable for accurately extracting buildings from hyperspectral images in situations involving limited training samples.
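As an illustrative aside (not code from the paper), the mean intersection over union — the metric behind the 91.90%, 89.42%, and 77.11% figures reported above — can be sketched for flat per-pixel label maps as follows; the function name `miou` and the toy labels are assumptions for demonstration only:

```python
def miou(pred, truth, num_classes):
    """Mean IoU: per-class intersection / union, averaged over the
    classes that appear in either the prediction or the ground truth."""
    ious = []
    for c in range(num_classes):
        inter = sum(p == c and t == c for p, t in zip(pred, truth))
        union = sum(p == c or t == c for p, t in zip(pred, truth))
        if union:                      # skip classes absent from both maps
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Two-class (building vs. background) toy example on four pixels:
print(miou([1, 1, 0, 0], [1, 0, 0, 0], 2))  # ≈ 0.583
```

Building extraction is effectively this two-class case, which is why a few percentage points of mIoU translate directly into visibly cleaner building masks.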
Pages: 12