Exploring unsupervised pre-training for echo state networks

Cited by: 1
Authors
Steiner, Peter [1 ]
Jalalvand, Azarakhsh [2 ]
Birkholz, Peter [1 ]
Affiliations
[1] Tech Univ Dresden, Inst Acoust & Speech Commun, D-01069 Dresden, Saxony, Germany
[2] Princeton Univ, Mech & Aerosp Engn Dept, Princeton, NJ 08544 USA
Keywords
ESN; RCN; Clustering; State machine; Representation; Reservoirs; Design; LSTM
DOI
10.1007/s00521-023-08988-x
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Echo State Networks (ESNs) are a special type of Recurrent Neural Network (RNN) in which the input and recurrent connections are traditionally generated randomly and only the output weights are trained. However, recent publications have shown that a purely random initialization may not be ideal; instead, completely deterministic or data-driven ESN structures have been proposed. In this work, an unsupervised training methodology for the hidden components of an ESN is proposed. Motivated by traditional Hidden Markov Models (HMMs), which have been widely used for speech recognition for decades, we present an unsupervised pre-training method for the recurrent and bias weights of ESNs. This approach allows unlabeled data to be used during the training procedure and shows superior results for continuous spoken phoneme recognition, as well as for a large variety of time-series classification datasets.
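
Illustration: the abstract contrasts purely random reservoir initialization with data-driven, unsupervised alternatives. The minimal Python sketch below shows that general idea by taking reservoir input weights from k-means centroids of unlabeled inputs instead of a random draw. It is not the authors' implementation; all function names, hyperparameters, and the specific choice of k-means are illustrative assumptions.

# Sketch: random vs. clustering-based input weights for a small ESN.
# Everything here (sizes, leak rate, sparsity, normalization) is assumed.
import numpy as np
from sklearn.cluster import KMeans

def random_input_weights(n_reservoir, n_inputs, rng):
    # Classical ESN: input weights drawn uniformly at random.
    return rng.uniform(-1.0, 1.0, size=(n_reservoir, n_inputs))

def clustered_input_weights(X, n_reservoir):
    # Unsupervised alternative: each reservoir neuron's input weight
    # vector is a normalized k-means centroid of the unlabeled inputs.
    centroids = KMeans(n_clusters=n_reservoir, n_init=10,
                       random_state=0).fit(X).cluster_centers_
    norms = np.linalg.norm(centroids, axis=1, keepdims=True)
    return centroids / np.maximum(norms, 1e-12)

def run_reservoir(X, W_in, W_res, leak=0.3):
    # Collect leaky-integrator reservoir states for one input sequence.
    states = np.zeros((len(X), W_res.shape[0]))
    x = np.zeros(W_res.shape[0])
    for t, u in enumerate(X):
        x = (1.0 - leak) * x + leak * np.tanh(W_in @ u + W_res @ x)
        states[t] = x
    return states

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    X = rng.standard_normal((500, 3))  # toy unlabeled input sequence
    n_reservoir = 50
    # Sparse random recurrent weights, rescaled so the spectral radius
    # stays below 1 (the usual echo state property heuristic).
    W_res = rng.uniform(-1, 1, (n_reservoir, n_reservoir))
    W_res[rng.random(W_res.shape) > 0.1] = 0.0
    W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))
    states_rand = run_reservoir(X, random_input_weights(n_reservoir, 3, rng), W_res)
    states_clus = run_reservoir(X, clustered_input_weights(X, n_reservoir), W_res)
    print(states_rand.shape, states_clus.shape)  # (500, 50) (500, 50)

In this toy example the k-means step stands in for the unsupervised pre-training described above; the paper itself pre-trains the recurrent and bias weights, which this sketch does not attempt.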
Pages: 24225 - 24242
Page count: 18
Related Papers
50 records in total
  • [1] Exploring unsupervised pre-training for echo state networks
    Peter Steiner
    Azarakhsh Jalalvand
    Peter Birkholz
    Neural Computing and Applications, 2023, 35 : 24225 - 24242
  • [2] Ensemble and Pre-Training Approach for Echo State Network and Extreme Learning Machine Models
    Tang, Lingyu
    Wang, Jun
    Wang, Mengyao
    Zhao, Chunyu
    ENTROPY, 2024, 26 (03)
  • [3] UNSUPERVISED POINT CLOUD PRE-TRAINING VIA CONTRASTING AND CLUSTERING
    Mei, Guofeng
    Huang, Xiaoshui
    Liu, Juan
    Zhang, Jian
    Wu, Qiang
    2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2022, : 66 - 70
  • [4] Unsupervised Pretraining of Echo State Networks for Onset Detection
    Steiner, Peter
    Jalalvand, Azarakhsh
    Birkholz, Peter
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2021, PT V, 2021, 12895 : 59 - 70
  • [5] Multi-Scale Convolutional Echo State Network With an Effective Pre-Training Strategy for Solar Irradiance Forecasting
    Yang, Dayong
    Li, Tao
    Guo, Zhijun
    Li, Qian
    IEEE ACCESS, 2024, 12 : 13442 - 13452
  • [6] SMILES-BERT: Large Scale Unsupervised Pre-Training for Molecular Property Prediction
    Wang, Sheng
    Guo, Yuzhi
    Wang, Yuhong
    Sun, Hongmao
    Huang, Junzhou
    ACM-BCB'19: PROCEEDINGS OF THE 10TH ACM INTERNATIONAL CONFERENCE ON BIOINFORMATICS, COMPUTATIONAL BIOLOGY AND HEALTH INFORMATICS, 2019, : 429 - 436
  • [7] Incorporating pre-training in long short-term memory networks for tweet classification
    Yuan, Shuhan
    Wu, Xintao
    Xiang, Yang
    SOCIAL NETWORK ANALYSIS AND MINING, 2018, 8 (01)
  • [8] Incorporating Pre-Training in Long Short-Term Memory Networks for Tweets Classification
    Yuan, Shuhan
    Wu, Xintao
    Xiang, Yang
    2016 IEEE 16TH INTERNATIONAL CONFERENCE ON DATA MINING (ICDM), 2016, : 1329 - 1334
  • [9] Cluster-Based Input Weight Initialization for Echo State Networks
    Steiner, Peter
    Jalalvand, Azarakhsh
    Birkholz, Peter
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (10) : 7648 - 7659
  • [10] Clone Detection with Pre-training Enhanced Code Representation
    Leng, L.-S.
    Liu, S.
    Tian, C.-L.
    Dou, S.-J.
    Wang, Z.
    Zhang, M.-S.
    Ruan Jian Xue Bao/Journal of Software, 2022, 33 (05): 1758 - 1773