Deep supervised visual saliency model addressing low-level features

Cited by: 1
Authors
Zhou L. [1]
Gu X. [1]
Affiliations
[1] Department of Electronic Engineering, Fudan University, Shanghai
Funding
National Natural Science Foundation of China
Keywords
Fully convolutional networks; Low-level features; Pulse coupled neural networks; Visual saliency
DOI
10.1007/s12652-019-01441-9
Abstract
Deep neural networks detect visual saliency using semantic information. These high-level features locate salient regions efficiently but pay less attention to structure preservation. In this paper, we emphasize crucial low-level features within deep neural networks in order to preserve the local structure and integrity of objects. The proposed framework consists of an image enhancement network and a saliency prediction network. In the first part of our model, we segment the image with a superpixel-based unit-linking pulse coupled neural network (PCNN) and generate a weight map representing contrast and spatial properties. With the help of these low-level features, a fully convolutional network (FCN) computes the saliency map in the second part. The weight map enhances the input channels of the FCN and refines the output prediction, yielding polished details and contours of salient objects. Experimental results on five benchmark datasets demonstrate the superior performance of our model against other state-of-the-art approaches. © 2019, Springer-Verlag GmbH Germany, part of Springer Nature.
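As a rough illustration of the low-level enhancement idea described in the abstract, the sketch below builds a contrast-and-spatial weight map from superpixels and uses it to modulate the RGB input of a saliency network. It is a minimal approximation under stated assumptions, not the authors' implementation: SLIC superpixels stand in for the unit-linking PCNN segmentation, the centre-bias spatial prior and the function names weight_map and enhance_input are illustrative assumptions, and no FCN is included.

import numpy as np
from skimage.segmentation import slic
from skimage.util import img_as_float

def weight_map(image, n_segments=300):
    """Per-pixel weight combining superpixel colour contrast with a centre prior."""
    img = img_as_float(image)
    labels = slic(img, n_segments=n_segments, start_label=0)
    h, w = labels.shape
    ys, xs = np.mgrid[0:h, 0:w]
    ids = np.unique(labels)
    # Mean colour and normalised centroid of every superpixel
    means = np.array([img[labels == i].mean(axis=0) for i in ids])
    cents = np.array([[ys[labels == i].mean() / h, xs[labels == i].mean() / w] for i in ids])
    # Contrast cue: average colour distance to all other superpixels
    contrast = np.linalg.norm(means[:, None, :] - means[None, :, :], axis=2).mean(axis=1)
    # Spatial cue: superpixels near the image centre receive larger weights
    spatial = 1.0 - np.linalg.norm(cents - 0.5, axis=1)
    score = contrast * spatial
    score = (score - score.min()) / (score.max() - score.min() + 1e-8)
    weights = np.zeros((h, w))
    for i, s in zip(ids, score):
        weights[labels == i] = s
    return weights

def enhance_input(image, weights, alpha=0.5):
    """Scale each RGB channel by the weight map before feeding the saliency network."""
    return img_as_float(image) * (1.0 + alpha * weights[..., None])

In the same spirit, such a map could also be multiplied into the network's output to sharpen object contours, mirroring the refinement step the abstract describes.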
Pages: 15659-15672
Page count: 13