Transformer Self-Attention Network for Forecasting Mortality Rates

Cited by: 0
Authors
Roshani, Amin [1 ]
Izadi, Muhyiddin [1 ]
Khaledi, Baha-Eldin [2 ]
Affiliations
[1] Razi Univ, Dept Stat, Kermanshah, Iran
[2] Univ Northern Colorado, Dept Appl Stat & Res Methods, Greeley, CO 80639 USA
Source
Keywords
Auto-Regressive Integrated Moving Average; Human Mortality Database; Long Short-Term Memory; Mean Absolute Percentage Error; Poisson-Lee-Carter Mortality Model; Recurrent Neural Network; Simple Exponential Smoothing; Time Series; EXTENSION; MODEL;
DOI
Not available
Chinese Library Classification (CLC)
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics];
Subject Classification Codes
020208; 070103; 0714
Abstract
The transformer network is a deep learning architecture that uses self-attention mechanisms to capture long-term dependencies in sequential data. The Poisson-Lee-Carter model, introduced to predict mortality rates, includes an age-specific factor and a calendar-year factor; the latter is the model's time-dependent component. In this paper, we use the transformer to predict the time-dependent component of the Poisson-Lee-Carter model. We use real mortality data sets from several countries to compare the mortality-rate prediction performance of the transformer with that of the long short-term memory (LSTM) neural network, the classical ARIMA time-series model, and the simple exponential smoothing method. The results show that the transformer dominates, or is comparable to, the LSTM, ARIMA, and simple exponential smoothing methods.
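For context, the Poisson-Lee-Carter model named in the abstract is the Poisson log-bilinear version of the Lee-Carter model: with D_{x,t} and E_{x,t} the observed death count and central exposure at age x in calendar year t,

\[
D_{x,t} \sim \mathrm{Poisson}\big(E_{x,t}\,\mu_x(t)\big), \qquad \mu_x(t) = \exp(\alpha_x + \beta_x \kappa_t),
\]

subject to the usual identifiability constraints \(\sum_x \beta_x = 1\) and \(\sum_t \kappa_t = 0\). The calendar-year index \(\kappa_t\) is the time-dependent component, so forecasting mortality reduces to forecasting the univariate series \(\kappa_t\). The self-attention the transformer applies to that series is the standard scaled dot-product attention, \(\mathrm{Attention}(Q,K,V) = \mathrm{softmax}(QK^{\top}/\sqrt{d_k})\,V\), and prediction accuracy in such comparisons is scored with the mean absolute percentage error listed among the keywords, \(\mathrm{MAPE} = \frac{100}{n}\sum_{i=1}^{n} \left|(y_i - \hat{y}_i)/y_i\right|\).

Below is a minimal sketch of forecasting \(\kappa_t\) one step ahead with a small transformer encoder. It is an illustration under stated assumptions, not the authors' implementation: the window length, model sizes, training loop, and synthetic series are all invented for the example (real \(\kappa_t\) estimates would come from fitting the Poisson-Lee-Carter model to Human Mortality Database data).

```python
# Sketch only (assumes PyTorch): one-step-ahead forecasting of the
# Lee-Carter time index k_t with a small transformer encoder.
# All hyperparameters below are illustrative, not taken from the paper.
import torch
import torch.nn as nn

class KtTransformer(nn.Module):
    def __init__(self, window=10, d_model=32, nhead=4, num_layers=2):
        super().__init__()
        self.input_proj = nn.Linear(1, d_model)                # scalar k_t -> d_model
        self.pos = nn.Parameter(torch.zeros(window, d_model))  # learned positional encoding
        layer = nn.TransformerEncoderLayer(d_model, nhead,
                                           dim_feedforward=64,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, 1)                      # predict k_{t+1}

    def forward(self, x):                  # x: (batch, window, 1)
        h = self.input_proj(x) + self.pos  # add positions so order matters
        h = self.encoder(h)                # self-attention over the window
        return self.head(h[:, -1])         # read off the last position

# Toy usage: slide a length-10 window over a synthetic, downward-trending
# k_t series (mortality-improvement-like) and train on (window, next) pairs.
torch.manual_seed(0)
k = -0.5 * torch.arange(80, dtype=torch.float) + torch.randn(80)
X = torch.stack([k[i:i + 10] for i in range(70)]).unsqueeze(-1)  # (70, 10, 1)
y = k[10:].unsqueeze(-1)                                         # (70, 1)

model = KtTransformer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
```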
Pages: 81-103
Number of pages: 23
Related Papers (50 records in total)
  • [1] Transformer Self-Attention Change Detection Network with Frozen Parameters
    Cheng, Peiyang
    Xia, Min
    Wang, Dehao
    Lin, Haifeng
    Zhao, Zikai
    APPLIED SCIENCES-BASEL, 2025, 15 (06):
  • [2] Relative molecule self-attention transformer
    Maziarka, Lukasz
    Majchrowski, Dawid
    Danel, Tomasz
    Gainski, Piotr
    Tabor, Jacek
    Podolak, Igor
    Morkisz, Pawel
    Jastrzebski, Stanislaw
    JOURNAL OF CHEMINFORMATICS, 2024, 16 (01)
  • [3] Wavelet Frequency Division Self-Attention Transformer Image Deraining Network
    Fang, Siyan
    Liu, Bin
    COMPUTER ENGINEERING AND APPLICATIONS, 2024, 60 (06): 259 - 273
  • [4] CNN-TRANSFORMER WITH SELF-ATTENTION NETWORK FOR SOUND EVENT DETECTION
    Wakayama, Keigo
    Saito, Shoichiro
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 806 - 810
  • [5] DSANet: Dual Self-Attention Network for Multivariate Time Series Forecasting
    Huang, Siteng
    Wang, Donglin
    Wu, Xuehan
    Tang, Ao
    PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT (CIKM '19), 2019, : 2129 - 2132
  • [6] The function of the self-attention network
    Cunningham, Sheila J.
    COGNITIVE NEUROSCIENCE, 2016, 7 (1-4) : 21 - 22
  • [7] Universal Graph Transformer Self-Attention Networks
    Nguyen, Dai Quoc
    Nguyen, Tu Dinh
    Phung, Dinh
    COMPANION PROCEEDINGS OF THE WEB CONFERENCE 2022, WWW 2022 COMPANION, 2022, : 193 - 196
  • [8] Sparse self-attention transformer for image inpainting
    Huang, Wenli
    Deng, Ye
    Hui, Siqi
    Wu, Yang
    Zhou, Sanping
    Wang, Jinjun
    PATTERN RECOGNITION, 2024, 145
  • [9] SST: self-attention transformer for infrared deconvolution
    Gao, Lei
    Yan, Xiaohong
    Deng, Lizhen
    Xu, Guoxia
    Zhu, Hu
    INFRARED PHYSICS & TECHNOLOGY, 2024, 140