Unsupervised Continual Learning in Streaming Environments

Cited by: 18
Authors
Ashfahani, Andri [1 ]
Pratama, Mahardhika [2 ]
Affiliations
[1] Nanyang Technol Univ, Sch Comp Sci & Engn, Singapore 639798, Singapore
[2] Univ South Australia, Acad Unit STEM, Adelaide, SA 5001, Australia
Keywords
Labeling; Feature extraction; Costs; Task analysis; Clustering algorithms; Training; Delays; Continual learning; Data streams; Evolving intelligent systems; Online clustering; Unsupervised learning; Multimodal anomaly detection; Fault detection; Diagnosis; Autoencoders; Transient
DOI
10.1109/TNNLS.2022.3163362
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
A deep clustering network (DCN) is attractive for data streams because of its ability to extract natural features, bypassing the laborious feature engineering step. The automatic construction of deep networks in streaming environments remains an open issue, however, and it is further hindered by the expensive labeling cost of data streams, which increases the demand for unsupervised approaches. This article presents an unsupervised approach for constructing a DCN on the fly via simultaneous deep learning and clustering, termed the autonomous DCN (ADCN). It combines a feature extraction layer with autonomous fully connected layers in which both network width and depth are self-evolved from data streams based on the bias-variance decomposition of the reconstruction loss. A self-clustering mechanism operates in the deep embedding space of every fully connected layer, while the final output is inferred via the summation of cluster prediction scores. Furthermore, a latent-based regularization is incorporated to resolve the catastrophic forgetting issue. A rigorous numerical study shows that ADCN outperforms its counterparts while offering fully autonomous construction of the ADCN structure in streaming environments, without any labeled samples for model updates. To support the reproducible research initiative, the code, supplementary material, and raw results of ADCN are made available at https://github.com/andriash001/AutonomousDCN.git
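The abstract highlights two mechanisms: per-layer clustering in the deep embedding space, with the final output taken as the summation of cluster prediction scores, and structural self-evolution driven by a bias-variance view of the reconstruction loss. The Python sketch below is only an illustration of these two ideas under stated assumptions; the scoring rule (softmax over negative distances), the running-statistics growth test, and all names (cluster_scores, predict, WidthGrowthMonitor) are hypothetical stand-ins, not the paper's exact formulation.

import numpy as np

def cluster_scores(z, centroids):
    # Soft assignment scores for latent samples z (n, d) against centroids (k, d).
    # Assumed scoring rule: softmax over negative Euclidean distances.
    dist = np.linalg.norm(z[:, None, :] - centroids[None, :, :], axis=-1)
    logits = -dist
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    return exp / exp.sum(axis=1, keepdims=True)

def predict(latents_per_layer, centroids_per_layer):
    # Final output as the argmax of summed per-layer cluster scores, mirroring
    # the abstract's "summation of cluster prediction scores" (assumes aligned
    # score vectors, i.e., the same number of clusters in every layer).
    total = sum(cluster_scores(z, c)
                for z, c in zip(latents_per_layer, centroids_per_layer))
    return total.argmax(axis=1)

class WidthGrowthMonitor:
    # Illustrative drift-style test on the reconstruction loss: signal growth
    # when the running loss statistic rises well above its historical minimum.
    # This is a stand-in for the paper's bias-variance-based criterion.
    def __init__(self, k=2.0):
        self.k = k
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0
        self.min_mean = np.inf
        self.min_std = np.inf

    def update(self, recon_loss):
        # Welford running mean/variance of the streaming reconstruction loss.
        self.n += 1
        delta = recon_loss - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (recon_loss - self.mean)
        std = np.sqrt(self.m2 / self.n) if self.n > 1 else 0.0
        if self.mean + std < self.min_mean + self.min_std:
            self.min_mean, self.min_std = self.mean, std
        return self.mean + std > self.min_mean + self.k * self.min_std

# Example: two latent layers, three clusters each, on a small batch.
rng = np.random.default_rng(0)
latents = [rng.normal(size=(8, 4)), rng.normal(size=(8, 6))]
centroids = [rng.normal(size=(3, 4)), rng.normal(size=(3, 6))]
print(predict(latents, centroids))  # one cluster index per sample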
Pages: 9992-10003
Page count: 12