OnsitNet: A memory-capable online time series forecasting model incorporating a self-attention mechanism

Cited by: 1
Authors
Liu, Hui [1 ,2 ,3 ]
Wang, Zhengkai [1 ]
Dong, Xiyao [1 ]
Du, Junzhao [1 ,2 ,3 ]
Affiliations
[1] Xidian Univ, Sch Comp Sci & Technol, Xian 710126, Peoples R China
[2] Minist Educ, Engn Res Ctr Blockchain Technol Applicat & Evaluat, Xian 710126, Peoples R China
[3] Key Lab Smart Human Comp Interact & Wearable Techn, Xian 710126, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Time series forecasting; Online learning; Offline learning; Self-attention mechanism; ITransformer;
DOI
10.1016/j.eswa.2024.125231
CLC classification
TP18 [Theory of Artificial Intelligence];
Subject classification codes
081104; 0812; 0835; 1405;
Abstract
Traditional time series (TS) forecasting models are trained on fixed, static datasets and lack scalability when faced with the continuous influx of data in real-world scenarios. Real-time online learning of data streams is crucial for improving forecasting efficiency. However, few studies focus on online TS forecasting, and existing approaches have several limitations. Most current online TS forecasting models merely train on data streams and are ineffective in handling concept drift. Furthermore, they often fail to adequately consider dependencies between variables and do not leverage the robust modeling capabilities of offline models. Therefore, we propose an innovative online learning method called OnsitNet. It consists of multiple learning modules whose convolutional kernels progressively expand their receptive fields through an exponentially growing dilation factor, aiding the capture of multi-scale data features. Within each learning module, we propose an online learning strategy focused on memorizing concept-drift scenarios, comprising a fast learner, a memorizer, and a Pearson trigger. The Pearson trigger detects new data patterns and activates dynamic interaction between the fast learner and the memorizer, facilitating rapid online learning of data streams. To capture the dependencies between variables, we propose a new model, SITransformer, a streamlined version of the offline model ITransformer. Unlike the traditional Transformer, it reverses the roles of the feed-forward network and the attention mechanism; this inverted architecture is more effective at learning correlations between variables. Experimental results on five real-world datasets show that OnsitNet achieves lower online prediction errors, enabling timely and effective forecasting of future trends in TS data.
Pages: 13
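The abstract names two concrete mechanisms: convolutional kernels whose receptive field grows with an exponentially increasing dilation factor, and a Pearson trigger that detects new data patterns and activates the fast-learner/memorizer interaction. Below is a minimal NumPy sketch of how such a trigger and the receptive-field growth could look; the class name PearsonTrigger, the 0.6 threshold, the reference-update rule, and the windowing are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

class PearsonTrigger:
    """Illustrative concept-drift detector (an assumption, not the paper's exact rule):
    fire when the Pearson correlation between the newest input window and a stored
    reference pattern falls below a threshold."""

    def __init__(self, threshold: float = 0.6):
        self.threshold = threshold
        self.reference = None  # the last pattern treated as "known"

    def fires(self, window: np.ndarray) -> bool:
        if self.reference is None:
            self.reference = window.copy()  # first window just initializes the reference
            return False
        r = np.corrcoef(window, self.reference)[0, 1]  # Pearson correlation
        if r < self.threshold:
            self.reference = window.copy()  # new pattern: remember it and fire
            return True
        return False


# Toy usage: a known sine pattern, a slightly shifted (same-regime) window,
# and a roughly orthogonal cosine window standing in for a new data pattern.
x = np.linspace(0.0, 4.0 * np.pi, 64)
trigger = PearsonTrigger(threshold=0.6)
print(trigger.fires(np.sin(x)))        # False: initializes the reference
print(trigger.fires(np.sin(x + 0.1)))  # False: r ~ 0.995, same regime
print(trigger.fires(np.cos(x)))        # True:  r ~ 0, trigger activates the memorizer path

# Receptive-field growth for a stack of dilated convolutions with dilation 2**i,
# matching the "exponentially growing dilation factor" described in the abstract
# (standard formula for stride-1 dilated convolutions with kernel size k):
k, num_layers = 3, 5
receptive_field = 1 + (k - 1) * sum(2 ** i for i in range(num_layers))
print(receptive_field)  # 63 time steps visible to the top layer
```

A comparable hook could sit inside each OnsitNet learning module, with the fast learner updating on every step and the memorizer consulted only when the trigger fires.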