Differentiable Bi-Sparse Multi-View Co-Clustering

Cited by: 37
Authors
Du, Shide [1,2]
Liu, Zhanghui [1,2]
Chen, Zhaoliang [1,2]
Yang, Wenyuan [3]
Wang, Shiping [1,2]
Affiliations
[1] Fuzhou Univ, Coll Math & Comp Sci, Fuzhou 350116, Peoples R China
[2] Fuzhou Univ, Fujian Prov Key Lab Network Comp & Intelligent In, Fuzhou 350116, Peoples R China
[3] Minnan Normal Univ, Fujian Key Lab Granular Comp & Applicat, Zhangzhou 363000, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Collaboration; deep learning; multi-view clustering; co-clustering; sparse representation; differentiable blocks
DOI
10.1109/TSP.2021.3101979
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology]
Subject Classification Code
0808; 0809
Abstract
Deep multi-view clustering uses neural networks to extract the complementary and consistent information latent in multi-view features, yielding a unified representation that improves clustering performance. Although a multitude of deep multi-view clustering approaches have been proposed, most achieve good performance at the cost of theoretical interpretability. In this paper, we propose an effective differentiable network with alternating iterative optimization for multi-view co-clustering, termed differentiable bi-sparse multi-view co-clustering (DBMC), together with an extension named elevated DBMC (EDBMC). Both methods are transformed into equivalent deep networks derived from their objective loss functions, and thus combine the strong interpretability of classical machine learning methods with the superior performance of deep networks. Moreover, DBMC and EDBMC learn a joint and consistent collaborative representation from multi-source features while guaranteeing sparsity in both the multi-view feature space and the single-view sample space. They can be converted into deep differentiable network frameworks with block-wise iterative training. Correspondingly, we design two three-step iterative differentiable networks that solve the resulting optimization problems with theoretically guaranteed convergence. Extensive experiments on six multi-view benchmark datasets demonstrate that the proposed frameworks outperform state-of-the-art multi-view clustering methods.
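The abstract describes the approach only at a high level. As an illustration of the general idea it names, unrolling a sparse-coding iteration into a differentiable, block-wise trainable network and fusing view-specific codes into one shared representation, here is a minimal PyTorch sketch. It is not the authors' DBMC/EDBMC implementation: the module names (UnrolledSparseBlock, MultiViewFusion), the ISTA-style update, the learnable step and threshold, and the simple averaging fusion are all assumptions chosen for brevity.

```python
# Minimal sketch (assumptions, not the paper's code): unroll a few ISTA-style
# sparse-coding steps per view, then average the view-specific codes into a
# shared representation that could feed a downstream clustering objective.
import torch
import torch.nn as nn


def soft_threshold(z, theta):
    """Element-wise soft-thresholding, the proximal operator of the l1 norm."""
    return torch.sign(z) * torch.clamp(z.abs() - theta, min=0.0)


class UnrolledSparseBlock(nn.Module):
    """Unrolls a few ISTA iterations for one view, with learnable step/threshold."""

    def __init__(self, in_dim, code_dim, n_steps=3):
        super().__init__()
        self.dictionary = nn.Parameter(torch.randn(in_dim, code_dim) * 0.01)
        self.step = nn.Parameter(torch.tensor(0.1))    # learnable gradient step
        self.theta = nn.Parameter(torch.tensor(0.05))  # learnable sparsity threshold
        self.n_steps = n_steps

    def forward(self, x):
        # x: (batch, in_dim) -> sparse code: (batch, code_dim)
        code = torch.zeros(x.size(0), self.dictionary.size(1), device=x.device)
        for _ in range(self.n_steps):
            residual = x - code @ self.dictionary.t()
            code = soft_threshold(code + self.step * (residual @ self.dictionary),
                                  self.theta)
        return code


class MultiViewFusion(nn.Module):
    """Runs one unrolled block per view and averages the codes into a shared one."""

    def __init__(self, view_dims, code_dim, n_steps=3):
        super().__init__()
        self.blocks = nn.ModuleList(
            UnrolledSparseBlock(d, code_dim, n_steps) for d in view_dims)

    def forward(self, views):
        codes = [blk(x) for blk, x in zip(self.blocks, views)]
        return torch.stack(codes, dim=0).mean(dim=0)


if __name__ == "__main__":
    views = [torch.randn(8, 20), torch.randn(8, 35)]  # two toy views, 8 samples
    model = MultiViewFusion(view_dims=[20, 35], code_dim=16)
    shared = model(views)
    print(shared.shape)  # torch.Size([8, 16])
```

Because every step of the unrolled iteration is differentiable, the whole pipeline can be trained end to end block by block, which is the interpretability-preserving property the abstract emphasizes.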
Pages: 4623-4636
Page count: 14