Superpixel Segmentation Based on Clustering by Finding Density Peaks

Cited by: 0
Authors
Zhang Z.-L. [1,2]
Li A.-H. [2]
Li C.-W. [1]
Affiliations
[1] School of Electronic Science, National University of Defense Technology, Changsha
[2] College of Science and Mathematics, Montclair State University, NJ 07043
Source
Jisuanji Xuebao/Chinese Journal of Computers | 2020 / Vol. 43 / No. 01
Funding
National Natural Science Foundation of China;
Keywords
Clustering; Density peaks; Segmentation; Superpixel; Tree;
DOI
10.11897/SP.J.1016.2020.00001
Abstract
Superpixel segmentation has been widely used as a pre-processing step in many computer vision applications, such as object tracking, 3D reconstruction, visual saliency estimation, object detection, and medical image segmentation. In this paper, we present a novel superpixel segmentation algorithm inspired by a clustering method published in Science in 2014. Our algorithm produces superpixels by clustering pixels through searching for and finding density peaks. The algorithm consists of five steps. First, we estimate the density of each pixel within a local circular neighborhood in the image plane. Density is formulated as a weighted similarity between the central pixel and all of its surrounding pixels, while the radius of the neighborhood can be determined empirically from the desired number of superpixels. Second, for each pixel we search for the nearest pixel with a larger density and calculate the distance between them. For each pixel, the index of that nearest pixel and the distance to it are two attributes named ascription and distance, respectively. In the third step, we construct an ascription relation tree and assemble all the pixels into the tree according to their distances and ascriptions. A leaf of the tree represents a pixel. A directed edge in the tree starts from a pixel, arrives at its ascription, and is weighted by the corresponding distance. The tree reflects the ascription relationship among all the pixels in the input image. In the fourth step, we select several pixels with large densities and large distances as the seeds of the superpixels, and assign each seed a unique label in the tree. In the final step, by searching the tree, we find the closest superpixel seed for each pixel and assign that seed's label to it. Our algorithm has several advantages. It is flexible because it selects the seeds automatically and accurately controls the number and size of the superpixels it produces. It is fast because no iterative optimization is involved, and its computational complexity does not depend on the number of superpixels. We compare our algorithm with nine state-of-the-art methods on two benchmark datasets, BSDS300 and BSDS500. The comparison methods fall into two groups: 5 classical methods (SLIC, QS, DB, NC, and GB) and 4 recent canonical ones (LSC, ERS, SEEDS, and FLIC). We conducted extensive experiments to confirm the performance of our algorithm. We start by qualitatively examining the superpixels produced by all the methods; the examination shows that the superpixels of our algorithm adhere to the ground-truth edges more accurately than those of the other methods. To confirm this superior performance, we further evaluate all the methods quantitatively on four widely used measures: boundary recall, under-segmentation error, achievable segmentation accuracy, and computational complexity. The evaluation results show that our algorithm significantly outperforms the 5 classical methods and achieves scores better than or comparable to those of the 4 recent canonical methods. Moreover, the runtime of our algorithm does not increase as the number of superpixels grows. © 2020, Science Press. All rights reserved.
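The five-step pipeline described in the abstract can be sketched compactly. The snippet below is a minimal, illustrative NumPy sketch reconstructed from the abstract alone and is not the authors' implementation: the function name density_peaks_superpixels, the exponential weighting of color differences, the radius heuristic, and the density-times-distance seed score are all assumptions made for illustration.

```python
# Minimal sketch of density-peaks superpixel clustering, assembled from the
# abstract's five steps. All parameter choices here are illustrative assumptions.
import numpy as np

def density_peaks_superpixels(lab_image, n_superpixels=200, color_scale=10.0):
    """Cluster the pixels of an H x W x 3 (e.g., CIELAB) float image into superpixels."""
    h, w, _ = lab_image.shape
    n = h * w
    # Step 1 heuristic: neighborhood radius derived from the desired superpixel count.
    radius = max(1, int(round(np.sqrt(n / n_superpixels) / 2)))

    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(np.float64)
    feats = lab_image.reshape(n, 3).astype(np.float64)

    # Step 1: density of each pixel = weighted color similarity to pixels inside
    # a circular window (np.roll wraps at borders; acceptable for a sketch).
    density = np.zeros(n)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if (dy == 0 and dx == 0) or dy * dy + dx * dx > radius * radius:
                continue
            shifted = np.roll(lab_image, (-dy, -dx), axis=(0, 1)).reshape(n, 3)
            diff = np.linalg.norm(feats - shifted, axis=1)
            density += np.exp(-diff / color_scale)

    # Steps 2-3: for each pixel, find the nearest pixel with a higher density
    # ("ascription") and the distance to it; the parent links form the tree.
    order = np.argsort(-density)            # pixels in decreasing density order
    parent = np.full(n, -1, dtype=np.int64)
    delta = np.full(n, np.inf)
    for rank, i in enumerate(order):
        if rank == 0:
            continue                        # the global density peak has no parent
        higher = order[:rank]               # all pixels with higher density
        d = np.linalg.norm(coords[higher] - coords[i], axis=1)
        j = int(np.argmin(d))
        parent[i], delta[i] = higher[j], d[j]
    delta[order[0]] = delta[np.isfinite(delta)].max()

    # Step 4: seeds = pixels scoring highest on density * distance.
    seeds = np.argsort(-(density * delta))[:n_superpixels]
    labels = np.full(n, -1, dtype=np.int64)
    labels[seeds] = np.arange(len(seeds))
    if labels[order[0]] == -1:              # safety: ensure the root of the tree is seeded
        labels[order[0]] = 0

    # Step 5: propagate seed labels down the ascription tree. Processing pixels in
    # decreasing density order guarantees each parent is labeled before its children.
    for i in order:
        if labels[i] == -1:
            labels[i] = labels[parent[i]]
    return labels.reshape(h, w)
```

The brute-force nearest-higher-density search above is quadratic in the number of pixels and is included only to make the ascription step explicit; it is not representative of the efficiency reported in the paper.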
Pages: 1-15
Page count: 14