Adapting Random Simple Recurrent Network for Online Forecasting Problems

Cited by: 1
Authors
Khennour, Mohammed Elmahdi [1 ]
Bouchachia, Abdelhamid [2 ]
Kherfi, Mohammed Lamine [1 ,3 ]
Bouanane, Khadra [1 ]
Aiadi, Oussama [1 ]
Affiliations
[1] Kasdi Merbah Univ, Lab Artificial Intelligence & Informat Technol, Ouargla, Algeria
[2] Bournemouth Univ, Dept Comp & Informat, Poole, Dorset, England
[3] Univ Quebec Trois Rivieres, LAMIA Lab, Trois Rivieres, PQ, Canada
Source
IEEE CONFERENCE ON EVOLVING AND ADAPTIVE INTELLIGENT SYSTEMS 2024, IEEE EAIS 2024 | 2024
Keywords
Random Simple Recurrent Network; Online Learning; Forecasting problems; Projected Online Gradient Descent; Follow-The-Proximally-Regularized-Leader; RANDOM NEURAL-NETWORKS;
DOI
10.1109/EAIS58494.2024.10570020
CLC Number
TP18 [Artificial intelligence theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Random Simple Recurrent Network (RSRN) is a forecasting model based on the Random Neural Network (RaNN) and the Recurrent Neural Network (RNN). RSRN has demonstrated energy-efficient and effective forecasting in offline mode, making it suitable for various applications. However, offline training faces challenges such as limited storage capacity, limited computational power, and evolving datasets. To address these limitations, this paper introduces an online learning approach for the RSRN model. We present adaptations of two online learning algorithms, Projected Online Gradient Descent (POGD) and Follow-The-Proximally-Regularized-Leader (FTRL-Proximal), for training RSRN in real time. POGD leverages Back Propagation Through Time (BPTT) to handle temporal dependencies within a sliding window, while FTRL-Proximal offers a balance between adaptability and stability, especially for sparse data. Our approach is the first to bring RSRN's forecasting capabilities to a dynamic environment, demonstrating its potential in real-world applications where data availability is not guaranteed. The effectiveness of the online RSRN under both approaches is demonstrated through experimental results on benchmark datasets, showing competitive performance that surpasses the offline mode in both computation and results.
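The two online learners named in the abstract can be sketched on a generic linear one-step forecaster over a sliding window. This is a hedged illustration only: the paper applies these updates to RSRN via BPTT gradients, which are not reproduced here, and all names and hyperparameters (`eta`, `radius`, `alpha`, `beta`, `l1`, `l2`) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def project_l2_ball(w, radius):
    """Project w onto the L2 ball of the given radius (the POGD projection)."""
    norm = np.linalg.norm(w)
    return w if norm <= radius else w * (radius / norm)

def pogd_forecast(series, window=3, eta=0.05, radius=3.0):
    """Projected Online Gradient Descent on the squared loss.

    At each step t: predict y_hat = w . x_t from the sliding window x_t,
    observe y_t, take a gradient step, then project back onto the
    feasible set.
    """
    w = np.zeros(window)
    losses = []
    for t in range(window, len(series)):
        x = series[t - window:t]           # sliding window of past values
        y = series[t]
        y_hat = w @ x
        losses.append((y_hat - y) ** 2)
        grad = 2.0 * (y_hat - y) * x       # gradient of the squared loss
        w = project_l2_ball(w - eta * grad, radius)
    return w, losses

class FTRLProximal:
    """Per-coordinate FTRL-Proximal update (McMahan et al., KDD 2013).

    The L1 term induces sparsity (helpful for sparse data), while the
    adaptive per-coordinate learning rate provides the adaptability /
    stability trade-off the abstract mentions.
    """
    def __init__(self, dim, alpha=0.1, beta=1.0, l1=0.01, l2=0.1):
        self.alpha, self.beta, self.l1, self.l2 = alpha, beta, l1, l2
        self.z = np.zeros(dim)             # accumulated (adjusted) gradients
        self.n = np.zeros(dim)             # accumulated squared gradients

    def weights(self):
        w = np.zeros_like(self.z)
        active = np.abs(self.z) > self.l1  # inactive coordinates stay exactly 0
        shrunk = self.z[active] - np.sign(self.z[active]) * self.l1
        w[active] = -shrunk / (
            (self.beta + np.sqrt(self.n[active])) / self.alpha + self.l2
        )
        return w

    def update(self, g):
        """Absorb a new gradient vector g into the accumulated state."""
        w = self.weights()
        sigma = (np.sqrt(self.n + g * g) - np.sqrt(self.n)) / self.alpha
        self.z += g - sigma * w
        self.n += g * g
```

In the paper's setting, `grad` (for POGD) and `g` (for FTRL-Proximal) would come from BPTT through the RSRN over the sliding window rather than from a linear predictor.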
Pages: 134-140
Number of pages: 7
Related Papers
38 references in total
[21]  
McMahan HB, 2013, 19TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING (KDD'13), P1222
[22]  
Nakip M., 2023, 2023 31 INT S MOD AN, P1
[23]  
Nakip M, 2024, Arxiv, DOI arXiv:2306.13030
[24]   A denial of service detector based on maximum likelihood detection and the random neural network [J].
Öke, Gülay ;
Loukas, Georgios .
COMPUTER JOURNAL, 2007, 50 (06) :717-727
[25]   A modified Elman neural network with a new learning rate scheme [J].
Ren, Guanghua ;
Cao, Yuting ;
Wen, Shiping ;
Huang, Tingwen ;
Zeng, Zhigang .
NEUROCOMPUTING, 2018, 286 :11-18
[26]   The Deep Learning Generative Adversarial Random Neural Network in data marketplaces: The digital creative [J].
Serrano, Will .
NEURAL NETWORKS, 2023, 165 :420-434
[27]   Epileptic Seizure Classification Based on Random Neural Networks Using Discrete Wavelet Transform for Electroencephalogram Signal Decomposition [J].
Shah, Syed Yaseen ;
Larijani, Hadi ;
Gibson, Ryan M. ;
Liarokapis, Dimitrios .
APPLIED SCIENCES-BASEL, 2024, 14 (02)
[28]  
Singer Y., 2009, ADV NEURAL INFORM PR, V22
[29]  
Sotirov S., 2012, P 13 INT WORKSH GEN, P49
[30]   The Random Neural Network: A Survey [J].
Timotheou, Stelios .
COMPUTER JOURNAL, 2010, 53 (03) :251-267