Multiscale target extraction using a spectral saliency map for a hyperspectral image

Cited by: 6
Authors
Zhang, Jing [1 ]
Geng, Wenhao [1 ]
Zhuo, Li [1 ]
Tian, Qi [2 ]
Cao, Yan [1 ]
Affiliations
[1] Beijing Univ Technol, Signal & Informat Proc Lab, Beijing 100124, Peoples R China
[2] Univ Texas San Antonio, Dept Comp Sci, San Antonio, TX 78249 USA
Funding
National Natural Science Foundation of China;
Keywords
OBJECT DETECTION; SEGMENTATION METHOD; VISUAL-ATTENTION; FUSION; SCENE; MODEL;
DOI
10.1364/AO.55.008089
Chinese Library Classification
O43 [Optics];
Discipline Classification Codes
070207; 0803;
Abstract
With the rapid growth in hyperspectral imagery acquisition capabilities, efficiently locating significant targets in hyperspectral imagery has become a fundamental task in remote-sensing applications. Existing target extraction methods mainly separate targets from the background by thresholding individual pixels and extracting image information at a single scale. However, owing to the high dimensionality and complex backgrounds of hyperspectral imagery, such methods struggle to produce good extraction results. Saliency detection is a promising direction because saliency features can quickly locate salient regions within complex backgrounds. Considering the spatial and spectral characteristics of a hyperspectral image, a multiscale target extraction method using a spectral saliency map is proposed, which includes: (1) a spectral saliency model is constructed to compute the spectral saliency map of a hyperspectral image; (2) the focus of attention (FOA), used as the seed point, is selected by winner-take-all (WTA) competition over the spectral saliency map; (3) the image is segmented at multiple scales by region growing based on the minimum-heterogeneity rule, after calculating the heterogeneity between the seed point and its surrounding pixels; (4) the salient target is detected and segmented under the constraint of the spectral saliency map. Experimental results show that the proposed method effectively improves the accuracy of target extraction for hyperspectral images. (C) 2016 Optical Society of America
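The pipeline sketched in the abstract (saliency map, WTA seed selection, heterogeneity-constrained region growing) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the paper's spectral saliency model is approximated here by the spectral angle of each pixel to the scene-mean spectrum, the WTA competition is reduced to a global argmax over the saliency map, and the heterogeneity measure (mean absolute spectral difference to the seed) and its threshold are illustrative assumptions.

```python
import numpy as np
from collections import deque

def spectral_saliency(cube):
    """Stand-in spectral saliency: spectral angle of each pixel to the
    scene-mean spectrum, so spectrally anomalous pixels score high.

    cube: (H, W, B) hyperspectral image. Returns an (H, W) saliency map.
    """
    H, W, B = cube.shape
    flat = cube.reshape(-1, B).astype(float)
    mean = flat.mean(axis=0)
    cos = flat @ mean / (np.linalg.norm(flat, axis=1)
                         * np.linalg.norm(mean) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0)).reshape(H, W)

def extract_target(cube, hetero_thresh=0.1):
    """Seed at the saliency maximum (a simple WTA analogue), then grow a
    region over 4-connected neighbors whose heterogeneity to the seed
    spectrum stays below the threshold."""
    sal = spectral_saliency(cube)
    seed = np.unravel_index(np.argmax(sal), sal.shape)  # FOA seed point
    H, W, _ = cube.shape
    mask = np.zeros((H, W), dtype=bool)
    mask[seed] = True
    seed_spec = cube[seed].astype(float)
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < H and 0 <= nx < W and not mask[ny, nx]:
                # Heterogeneity: mean absolute spectral difference to seed.
                het = np.abs(cube[ny, nx].astype(float) - seed_spec).mean()
                if het < hetero_thresh:
                    mask[ny, nx] = True
                    queue.append((ny, nx))
    return mask
```

On a synthetic cube with a spectrally anomalous square, the saliency maximum falls inside the anomaly and region growing recovers exactly that square; the single-scale growth here omits the paper's multiscale segmentation and final saliency-map constraint.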
Pages: 8089-8100
Page count: 12