Robust multilayer bootstrap networks in ensemble for unsupervised representation learning and clustering

Cited by: 0
Authors
Zhang, Xiao-Lei [1 ,2 ,3 ]
Li, Xuelong [2 ]
Affiliations
[1] Northwestern Polytech Univ, Sch Marine Sci & Technol, Xian 710072, Shaanxi, Peoples R China
[2] China Telecom, Inst Artificial Intelligence TeleAI, Beijing 710072, Peoples R China
[3] Northwestern Polytech Univ, Res & Dev Inst, Shenzhen, Peoples R China
Funding
U.S. National Science Foundation;
Keywords
Ensemble selection; Cluster ensemble; Multilayer bootstrap networks; Unsupervised learning;
DOI
10.1016/j.patcog.2024.110739
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Unsupervised nonlinear learning is known to be sensitive to the selection of hyperparameters, which hinders its practical use. Determining the optimal hyperparameter setting, which may differ dramatically across applications, is a hard problem. In this paper, we aim to address this issue for multilayer bootstrap networks (MBN), a recent unsupervised model, in as simple a way as possible. Specifically, we first propose an MBN ensemble (MBN-E) algorithm, which concatenates the sparse outputs of a set of MBN base models with different network structures into a new representation. We then take the new representation produced by MBN-E as a reference for selecting the optimal MBN base models. Moreover, we propose a fast version of MBN-E (fMBN-E), which is theoretically even faster than a single standard MBN and does not increase the estimation error of MBN-E. Empirically, compared with a number of advanced clustering methods, the proposed methods reach reasonable performance in their default settings, and fMBN-E is hundreds of times faster than MBN-E without suffering performance degradation. Applications to image segmentation and graph data mining further demonstrate the advantages of the proposed methods.
Pages: 14
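
The following Python sketch illustrates the two-step structure described in the abstract, under loose assumptions; it is not the authors' implementation. A plain k-means one-hot encoding stands in for each MBN base model, the varying number of clusters k stands in for the differing network structures, and the agreement criterion used for base-model selection (correlation between pairwise-similarity matrices) is chosen purely for illustration.

import numpy as np
from sklearn.cluster import KMeans

def sparse_output(X, k, seed=0):
    # One-hot cluster-assignment matrix, used here as a stand-in for the
    # sparse output of an MBN base model (hypothetical simplification).
    labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X)
    return np.eye(k)[labels]

def mbn_e(X, ks=(5, 10, 20, 40)):
    # Ensemble step: concatenate the sparse outputs of all base models
    # into one new (reference) representation.
    outputs = [sparse_output(X, k) for k in ks]
    return np.hstack(outputs), outputs

def select_base_model(ensemble, outputs):
    # Selection step: pick the base model whose pairwise-similarity structure
    # agrees most with that of the ensemble representation. Correlation of
    # similarity matrices is an illustrative criterion, not the paper's.
    ref_sim = ensemble @ ensemble.T
    scores = [np.corrcoef(ref_sim.ravel(), (o @ o.T).ravel())[0, 1]
              for o in outputs]
    return int(np.argmax(scores)), scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Three well-separated Gaussian blobs as toy data.
    X = np.vstack([rng.normal(c, 0.5, size=(50, 2)) for c in (0.0, 3.0, 6.0)])
    ensemble, outputs = mbn_e(X)
    best, scores = select_base_model(ensemble, outputs)
    print("agreement scores:", np.round(scores, 3), "-> selected base model", best)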