Unsupervised RGB-T saliency detection by node classification distance and sparse constrained graph learning

Cited by: 13
Authors
Gong, Aojun [1 ]
Huang, Liming [1 ]
Shi, Jiashun [1 ]
Liu, Chuang [2 ]
Affiliations
[1] Northeastern Univ, Sch Mech Engn & Automat, Shenyang, Liaoning, Peoples R China
[2] Shijiazhuang Tiedao Univ, Sch Mech Engn, Shijiazhuang, Hebei, Peoples R China
Keywords
RGB-T images; Unsupervised saliency detection; Node classification distance; Sparse constrained graph learning;
DOI
10.1007/s10489-021-02434-y
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Saliency detection methods that rely solely on RGB images are sensitive to the surrounding environment. Fusing complementary RGB and thermal infrared (RGB-T) images is an effective way to improve final saliency performance. However, datasets and algorithms for RGB-T saliency detection remain relatively scarce, which is a prominent problem in this research field. Therefore, this paper proposes an unsupervised RGB-T saliency detection method that does not require a large amount of labeled data. First, a graph model is constructed in which superpixels are treated as graph nodes. Instead of using the Euclidean distance to build the initial affinity matrix, a novel node classification distance is designed to explore the local relationships and geometrical structure of the graph nodes. In addition, a sparse constraint is introduced that not only makes the initial affinity matrix sparse and accurate but also enhances foreground and background consistency during graph learning. Furthermore, an adaptive ranking algorithm that fuses the classification distance and the sparse constraint unifies graph affinity learning with the computation of saliency values, which helps to generate more accurate saliency results. Experiments on two public RGB-T datasets demonstrate that the proposed method performs favorably against state-of-the-art algorithms.
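The adaptive ranking step described above builds on graph-based manifold ranking (reference [27] below). As a point of orientation, here is a minimal sketch of that base formulation only, not the paper's full method: saliency values are obtained in closed form as f* = (D − αW)⁻¹ y, where W is a node affinity matrix, D its degree matrix, and y a query indicator vector. The graph weights and the query below are illustrative, not taken from the paper.

```python
import numpy as np

def manifold_ranking(W, y, alpha=0.99):
    """Closed-form graph-based manifold ranking:
    solve (D - alpha * W) f = y, where D = diag(row sums of W)."""
    D = np.diag(W.sum(axis=1))
    return np.linalg.solve(D - alpha * W, y)

# Toy 4-node graph: nodes 0-1 are strongly connected, nodes 2-3 are
# strongly connected, with only weak links between the two pairs.
W = np.array([
    [0.0, 1.0, 0.1, 0.0],
    [1.0, 0.0, 0.0, 0.1],
    [0.1, 0.0, 0.0, 1.0],
    [0.0, 0.1, 1.0, 0.0],
])
y = np.array([1.0, 0.0, 0.0, 0.0])  # query seed on node 0

f = manifold_ranking(W, y)
# Node 1 (tightly linked to the seed) ranks above nodes 2 and 3.
```

In the RGB-T setting, the nodes would be superpixels from the aligned image pair and W would come from the learned affinity; the paper replaces the fixed Euclidean-distance W with its node classification distance and sparse constraint, learned jointly with f.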
Pages: 1030-1043
Page count: 14
Related papers (37 total)
[21]   RGBD Salient Object Detection: A Benchmark and Algorithms [J].
Peng, Houwen ;
Li, Bing ;
Xiong, Weihua ;
Hu, Weiming ;
Ji, Rongrong .
COMPUTER VISION - ECCV 2014, PT III, 2014, 8691 :92-109
[22]   RGBT Salient Object Detection: Benchmark and A Novel Cooperative Ranking Approach [J].
Tang, Jin ;
Fan, Dongzhe ;
Wang, Xiaoxiao ;
Tu, Zhengzheng ;
Li, Chenglong .
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2020, 30 (12) :4421-4433
[23]  
Tu Z., 2020, arXiv:2005.02315
[24]   RGB-T Image Saliency Detection via Collaborative Graph Learning [J].
Tu, Zhengzheng ;
Xia, Tian ;
Li, Chenglong ;
Wang, Xiaoxiao ;
Ma, Yan ;
Tang, Jin .
IEEE TRANSACTIONS ON MULTIMEDIA, 2020, 22 (01) :160-173
[25]  
Wang Guizhao, 2018, Image and Graphics Technologies and Applications (IGTA 2018, Beijing, China, April 8-10, 2018), Communications in Computer and Information Science (875), P359, DOI 10.1007/978-981-13-1702-6_36
[26]   Salient object detection based on distribution-edge guidance and iterative Bayesian optimization [J].
Xia, Chenxing ;
Gao, Xiuju ;
Li, Kuan-Ching ;
Zhao, Qianjin ;
Zhang, Shunxiang .
APPLIED INTELLIGENCE, 2020, 50 (10) :2977-2990
[27]   Saliency Detection via Graph-Based Manifold Ranking [J].
Yang, Chuan ;
Zhang, Lihe ;
Lu, Huchuan ;
Ruan, Xiang ;
Yang, Ming-Hsuan .
2013 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2013, :3166-3173
[28]   Top-Down Visual Saliency via Joint CRF and Dictionary Learning [J].
Yang, Jimei ;
Yang, Ming-Hsuan .
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2017, 39 (03) :576-588
[29]   Fast Grayscale-Thermal Foreground Detection With Collaborative Low-Rank Decomposition [J].
Yang, Sen ;
Luo, Bin ;
Li, Chenglong ;
Wang, Guizhao ;
Tang, Jin .
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2018, 28 (10) :2574-2585
[30]   Traffic sign detection based on visual co-saliency in complex scenes [J].
Yu, Lingli ;
Xia, Xumei ;
Zhou, Kaijun .
APPLIED INTELLIGENCE, 2019, 49 (02) :764-790