Semisupervised Progressive Representation Learning for Deep Multiview Clustering

Cited by: 10
Authors
Chen, Rui [1 ,2 ]
Tang, Yongqiang [2 ]
Xie, Yuan [3 ]
Feng, Wenlong [1 ,4 ]
Zhang, Wensheng [1 ,2 ]
Affiliations
[1] Hainan Univ, Coll Informat Sci & Technol, Haikou 570208, Peoples R China
[2] Chinese Acad Sci, Inst Automat, State Key Lab Multimodal Artificial Intelligence Syst, Beijing 100190, Peoples R China
[3] East China Normal Univ, Sch Comp Sci & Technol, Shanghai 200241, Peoples R China
[4] Hainan Univ, State Key Lab Marine Resource Utilizat South China Sea, Haikou 570208, Peoples R China
Funding
Natural Science Foundation of Shanghai; National Natural Science Foundation of China
Keywords
Representation learning; Training; Data models; Task analysis; Complexity theory; Semisupervised learning; Optimization; Deep clustering; multiview clustering; progressive sample learning; semisupervised learning; SELF-REPRESENTATION; IMAGE FEATURES; SCALE;
DOI
10.1109/TNNLS.2023.3278379
Chinese Library Classification (CLC) number
TP18 [Theory of Artificial Intelligence]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Multiview clustering has become a research hotspot in recent years owing to its strong capability for fusing heterogeneous data. Although a great deal of related work has appeared, most of it overlooks the potential of prior knowledge utilization and progressive sample learning, which leads to unsatisfactory clustering performance in real-world applications. To address these drawbacks, in this article, we propose a semisupervised progressive representation learning approach for deep multiview clustering (SPDMC). Specifically, to make full use of the discriminative information contained in prior knowledge, we design a flexible and unified regularization that models pairwise relationships between samples: the learned view-specific representations of must-link (ML) sample pairs are enforced to be similar, and those of cannot-link (CL) sample pairs to be dissimilar, under cosine similarity. Moreover, we introduce the self-paced learning (SPL) paradigm and account for two characteristics, complexity and diversity, when progressively learning the multiview representations, so that the complementarity across multiple views can be exploited thoroughly. Through comprehensive experiments on eight widely used image datasets, we show that the proposed approach outperforms state-of-the-art competitors.
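To make the two ingredients of the abstract concrete, the following is a minimal, hypothetical PyTorch sketch, not the authors' released implementation, of (i) a pairwise regularizer that pushes the cosine similarity of must-link (ML) representation pairs toward 1 and of cannot-link (CL) pairs toward 0, and (ii) a hard self-paced weighting that admits only samples whose loss falls below an age parameter. The names pairwise_constraint_loss, self_paced_weights, and lambda_age are illustrative assumptions and do not appear in the paper.

```python
# Hypothetical sketch of the two ideas described in the abstract; it is not
# the SPDMC implementation, only an illustration under stated assumptions.
import torch
import torch.nn.functional as F


def pairwise_constraint_loss(z, ml_pairs, cl_pairs):
    """Cosine-similarity regularizer on view-specific representations.

    z        : (n, d) tensor of learned representations for one view.
    ml_pairs : (m, 2) long tensor of must-link index pairs.
    cl_pairs : (c, 2) long tensor of cannot-link index pairs.
    ML pairs are pushed toward cosine similarity 1, CL pairs toward 0.
    """
    loss = z.new_zeros(())
    if len(ml_pairs) > 0:
        sim_ml = F.cosine_similarity(z[ml_pairs[:, 0]], z[ml_pairs[:, 1]], dim=1)
        loss = loss + (1.0 - sim_ml).mean()          # similar -> similarity near 1
    if len(cl_pairs) > 0:
        sim_cl = F.cosine_similarity(z[cl_pairs[:, 0]], z[cl_pairs[:, 1]], dim=1)
        loss = loss + sim_cl.clamp(min=0.0).mean()   # dissimilar -> similarity near 0
    return loss


def self_paced_weights(sample_losses, lambda_age):
    """Hard self-paced weighting: keep 'easy' samples with loss < lambda_age.

    As training proceeds, lambda_age is gradually increased so that harder
    samples are progressively admitted into the objective.
    """
    return (sample_losses < lambda_age).float()
```

In a multiview setting, one such regularizer would be applied per view, and a diversity term (for example, a group-sparse penalty over the per-view sample weights, as used in diversity-aware SPL variants) would discourage every view from selecting the same easy samples; that part is omitted here for brevity.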
Pages: 14341-14355
Number of pages: 15