Automatic Extraction of Built-Up Areas from Very High-Resolution Satellite Imagery Using Patch-Level Spatial Features and Gestalt Laws of Perceptual Grouping

Cited by: 4
Authors
Chen, Yixiang [1 ]
Lv, Zhiyong [2 ]
Huang, Bo [3 ]
Zhang, Pengdong [1 ]
Zhang, Yu [1 ]
Affiliations
[1] Nanjing Univ Posts & Telecommun, Dept Surveying & Geoinformat, Nanjing 210023, Peoples R China
[2] Xian Univ Technol, Sch Comp Sci & Engn, Xian 710048, Peoples R China
[3] Chinese Univ Hong Kong, Dept Geog & Resource Management, Hong Kong 999077, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
high-resolution; satellite image; built-up area extraction; gestalt laws of grouping
KeyWords Plus
LAND-COVER CHANGE; VISUAL-ATTENTION; SALIENCY DETECTION; HUMAN-SETTLEMENTS; PRESENCE INDEX; OPTIMIZATION; THRESHOLD; SPACE; MODEL
DOI
10.3390/rs11243022
Chinese Library Classification (CLC)
X [Environmental Science, Safety Science]
Subject Classification Code
08; 0830
Abstract
Automatic extraction of built-up areas from very high-resolution (VHR) satellite images has received increasing attention in recent years. However, because of the complex spectral and spatial characteristics of built-up areas, obtaining their precise location and extent remains challenging. In this study, a patch-based framework was proposed for unsupervised extraction of built-up areas from VHR imagery. First, a set of corner-constrained overlapping patches was defined to locate candidate built-up areas. Second, the salient texture and structural characteristics of each patch were represented as a feature vector of integrated high-frequency wavelet coefficients. Then, inspired by visual perception, a patch-level saliency model of built-up areas was constructed by incorporating the Gestalt laws of proximity and similarity, which effectively describe the spatial relationships between patches. Finally, built-up areas were extracted by thresholding, and their boundaries were refined with morphological operations. The performance of the proposed method was evaluated on two VHR image datasets, yielding average F-measure values of 0.8613 on the Google Earth dataset and 0.88 on the WorldView-2 dataset. Compared with existing models, the proposed method produces better extraction results, with more precise boundaries and better-preserved shape integrity.
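The pipeline outlined in the abstract (overlapping patches, high-frequency wavelet features, Gestalt proximity/similarity saliency, thresholding, morphological refinement) can be illustrated with a minimal sketch. This is not the authors' implementation: the Haar wavelet, Gaussian weighting, threshold rule, and all function and parameter names (patch_features, patch_saliency, built_up_mask, sigma_d, sigma_f, patch_size) are illustrative assumptions, and the corner-constrained candidate selection step is omitted.

```python
# Minimal sketch (assumed, not the paper's code) of patch-level built-up
# saliency: wavelet texture features weighted by Gestalt-style proximity
# and similarity, followed by thresholding and morphological refinement.
import numpy as np
import pywt
from scipy.ndimage import binary_closing


def patch_features(image, patch_size=64, stride=32):
    """Describe each overlapping patch by the mean magnitude of its
    high-frequency Haar wavelet subbands (horizontal, vertical, diagonal)."""
    feats, centers = [], []
    h, w = image.shape
    for r in range(0, h - patch_size + 1, stride):
        for c in range(0, w - patch_size + 1, stride):
            patch = image[r:r + patch_size, c:c + patch_size]
            _, (cH, cV, cD) = pywt.dwt2(patch, "haar")
            feats.append([np.mean(np.abs(cH)),
                          np.mean(np.abs(cV)),
                          np.mean(np.abs(cD))])
            centers.append([r + patch_size // 2, c + patch_size // 2])
    return np.asarray(feats, dtype=float), np.asarray(centers, dtype=float)


def patch_saliency(feats, centers, sigma_d=128.0, sigma_f=1.0):
    """Gestalt-inspired score: a patch is salient when spatially close
    patches (proximity) carry similar texture features (similarity)."""
    d2 = ((centers[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    f2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    support = np.exp(-d2 / (2 * sigma_d ** 2)) * np.exp(-f2 / (2 * sigma_f ** 2))
    np.fill_diagonal(support, 0.0)
    # Texture energy modulated by mutual support from neighbouring patches.
    return support.sum(axis=1) * feats.sum(axis=1)


def built_up_mask(image, patch_size=64, stride=32):
    """Threshold the patch saliency and refine the binary mask morphologically."""
    feats, centers = patch_features(image, patch_size, stride)
    sal = patch_saliency(feats, centers)
    keep = sal > sal.mean() + 0.5 * sal.std()   # simple global threshold
    mask = np.zeros(image.shape, dtype=bool)
    half = patch_size // 2
    for (r, c), k in zip(centers.astype(int), keep):
        if k:
            mask[r - half:r + half, c - half:c + half] = True
    return binary_closing(mask, structure=np.ones((15, 15)))
```

Given a single-band image as a 2-D float NumPy array, built_up_mask(img) returns a boolean mask whose True regions approximate the built-up patches; a more faithful reproduction would add the corner-based candidate filtering and boundary refinement described in the abstract.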
Pages: 22