OnsitNet: A memory-capable online time series forecasting model incorporating a self-attention mechanism

Cited by: 1
Authors
Liu, Hui [1 ,2 ,3 ]
Wang, Zhengkai [1 ]
Dong, Xiyao [1 ]
Du, Junzhao [1 ,2 ,3 ]
Affiliations
[1] Xidian Univ, Sch Comp Sci & Technol, Xian 710126, Peoples R China
[2] Minist Educ, Engn Res Ctr Blockchain Technol Applicat & Evaluat, Xian 710126, Peoples R China
[3] Key Lab Smart Human Comp Interact & Wearable Techn, Xian 710126, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Time series forecasting; Online learning; Offline learning; Self-attention mechanism; ITransformer;
DOI
10.1016/j.eswa.2024.125231
Chinese Library Classification
TP18 [Theory of artificial intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Traditional time series (TS) forecasting models are trained on fixed, static datasets and scale poorly under the continuous influx of data in real-world scenarios. Real-time online learning from data streams is therefore crucial for improving forecasting efficiency. However, few studies address online TS forecasting, and existing approaches have several limitations: most merely train on the incoming stream and handle concept drift poorly, and they often neglect dependencies between variables and fail to leverage the strong modeling capabilities of offline models. We therefore propose OnsitNet, an online learning method built from multiple learning modules whose convolutional kernels progressively expand their receptive fields through an exponentially growing dilation factor, aiding the capture of multi-scale features. Within each learning module, we propose an online learning strategy focused on memorizing concept-drift scenarios, composed of a fast learner, a memorizer, and a Pearson trigger. The Pearson trigger detects new data patterns and activates dynamic interaction between the fast learner and the memorizer, enabling rapid online learning of the data stream. To capture dependencies between variables, we propose SITransformer, a streamlined version of the offline model ITransformer. Unlike the traditional Transformer, it inverts the roles of the feed-forward network and the attention mechanism; this inverted architecture learns correlations between variables more effectively. Experiments on five real-world datasets show that OnsitNet achieves lower online prediction errors, enabling timely and effective forecasting of future trends in TS data.
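The abstract names two concrete mechanisms: convolutional learning modules whose dilation factor grows exponentially with depth, and a Pearson trigger that detects new data patterns to activate the fast-learner/memorizer interaction. The sketch below is a minimal illustration of those two ideas using PyTorch and NumPy; the names (DilatedStack, pearson_trigger), the drift threshold, and the window handling are assumptions made for illustration, not the authors' implementation.

import numpy as np
import torch
import torch.nn as nn


class DilatedStack(nn.Module):
    # Stack of 1D convolutions whose dilation doubles at each layer,
    # so the receptive field grows exponentially with depth (multi-scale features).
    def __init__(self, channels, kernel_size=3, depth=4):
        super().__init__()
        layers = []
        for i in range(depth):
            dilation = 2 ** i                              # 1, 2, 4, 8, ...
            padding = (kernel_size - 1) * dilation // 2    # keep sequence length fixed
            layers += [
                nn.Conv1d(channels, channels, kernel_size,
                          padding=padding, dilation=dilation),
                nn.ReLU(),
            ]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        # x: (batch, channels, time)
        return self.net(x)


def pearson_trigger(new_window, memory_window, threshold=0.5):
    # Fire when the incoming window correlates weakly with the stored pattern,
    # i.e. a candidate concept drift that should activate the interaction
    # between the fast learner and the memorizer (threshold is hypothetical).
    r = np.corrcoef(new_window, memory_window)[0, 1]
    return abs(r) < threshold


if __name__ == "__main__":
    x = torch.randn(8, 16, 96)            # 8 series, 16 variables, 96 time steps
    print(DilatedStack(16)(x).shape)      # torch.Size([8, 16, 96])
    fired = pearson_trigger(np.random.randn(96), np.sin(np.linspace(0, 6, 96)))
    print("trigger fired:", fired)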
Pages: 13
Related papers
50 records in total
  • [1] Evaluating the effectiveness of self-attention mechanism in tuberculosis time series forecasting
    Lv, Zhihong
    Sun, Rui
    Liu, Xin
    Wang, Shuo
    Guo, Xiaowei
    Lv, Yuan
    Yao, Min
    Zhou, Junhua
    BMC INFECTIOUS DISEASES, 2024, 24 (01)
  • [2] A Comparative Evaluation of Self-Attention Mechanism with ConvLSTM Model for Global Aerosol Time Series Forecasting
    Radivojevic, Dusan S.
    Lazovic, Ivan M.
    Mirkov, Nikola S.
    Ramadani, Uzahir R.
    Nikezic, Dusan P.
    MATHEMATICS, 2023, 11 (07)
  • [3] Bridging Self-Attention and Time Series Decomposition for Periodic Forecasting
    Jiang, Song
    Syed, Tahin
    Zhu, Xuan
    Levy, Joshua
    Aronchik, Boris
    Sun, Yizhou
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2022, 2022, : 3202 - 3211
  • [4] DSANet: Dual Self-Attention Network for Multivariate Time Series Forecasting
    Huang, Siteng
    Wang, Donglin
    Wu, Xuehan
    Tang, Ao
    PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT (CIKM '19), 2019, : 2129 - 2132
  • [5] Time-Series Forecasting Through Contrastive Learning with a Two-Dimensional Self-attention Mechanism
    Jiang, Linling
    Zhang, Fan
    Zhang, Mingli
    Zhang, Caiming
    NEURAL INFORMATION PROCESSING, ICONIP 2023, PT II, 2024, 14448 : 147 - 165
  • [6] Image Deblurring Algorithm Incorporating Self-Attention Mechanism
    Yu, Tingting
    Lv, Qiang
    Huang, Zhen
    Su, Zhang
    Wang, Xiangli
    INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, 2025,
  • [7] Multiscale echo self-attention memory network for multivariate time series classification
    Lyu, Huizi
    Huang, Desen
    Li, Sen
    Ma, Qianli
    Ng, Wing W. Y.
    NEUROCOMPUTING, 2023, 520 : 60 - 72
  • [8] Time Series Self-Attention Approach for Human Motion Forecasting: A Baseline 2D Pose Forecasting
    Yunus, Andi Prademon
    Morita, Kento
    Shirai, Nobu C.
    Wakabayashi, Tetsushi
    JOURNAL OF ADVANCED COMPUTATIONAL INTELLIGENCE AND INTELLIGENT INFORMATICS, 2023, 27 (03) : 445 - 457
  • [9] Hybrid LSTM Self-Attention Mechanism Model for Forecasting the Reform of Scientific Research in Morocco
    Fahim, Asmaa
    Tan, Qingmei
    Mazzi, Mouna
    Sahabuddin, Md
    Naz, Bushra
    Ullah Bazai, Sibghat
    COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE, 2021, 2021
  • [10] Rethink the Top-u Attention in Sparse Self-attention for Long Sequence Time-Series Forecasting
    Meng, Xiangxu
    Li, Wei
    Gaber, Tarek
    Zhao, Zheng
    Chen, Chuhao
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT VI, 2023, 14259 : 256 - 267