Transformer Self-Attention Network for Forecasting Mortality Rates

Citations: 0
Authors
Roshani, Amin [1 ]
Izadi, Muhyiddin [1 ]
Khaledi, Baha-Eldin [2 ]
Affiliations
[1] Razi Univ, Dept Stat, Kermanshah, Iran
[2] Univ Northern Colorado, Dept Appl Stat & Res Methods, Greeley, CO 80636 USA
Source
JIRSS-JOURNAL OF THE IRANIAN STATISTICAL SOCIETY | 2022, Vol. 21, No. 1
Keywords
Auto-Regressive Integrated Moving Average; Human Mortality Database; Long Short-Term Memory; Mean Absolute Percentage Error; Poisson-Lee-Carter Mortality Model; Recurrent Neural Network; Simple Exponential Smoothing; Time Series; EXTENSION; MODEL;
DOI
Not available
Chinese Library Classification (CLC)
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics]
Subject Classification Codes
020208; 070103; 0714
Abstract
The transformer network is a deep learning architecture that uses self-attention mechanisms to capture long-term dependencies in sequential data. The Poisson-Lee-Carter model, introduced to forecast mortality rates, includes an age-specific factor and a calendar-year factor, the latter being a time-dependent component. In this paper, we use the transformer to forecast the time-dependent component of the Poisson-Lee-Carter model. We use real mortality data sets from several countries to compare the mortality rate prediction performance of the transformer with that of the long short-term memory (LSTM) neural network, the classical ARIMA time-series model, and the simple exponential smoothing method. The results show that the transformer outperforms or is comparable to the LSTM, ARIMA, and simple exponential smoothing methods.
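
As a rough, self-contained illustration of the approach summarized above (not the authors' implementation), the Python sketch below forecasts a time-dependent mortality index one step ahead with a small PyTorch transformer encoder driven by self-attention. The window length, model sizes, training settings, and the synthetic stand-in for a fitted Lee-Carter k_t series are all assumptions made for the example.

import numpy as np
import torch
import torch.nn as nn

class KtTransformer(nn.Module):
    # Transformer encoder that forecasts the next value of a univariate series.
    def __init__(self, d_model=32, nhead=4, num_layers=2, window=10):
        super().__init__()
        self.input_proj = nn.Linear(1, d_model)                     # embed scalar observations
        self.pos_emb = nn.Parameter(torch.zeros(window, d_model))   # learned positional encoding
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, 1)                           # one-step-ahead output

    def forward(self, x):                     # x: (batch, window, 1)
        h = self.input_proj(x) + self.pos_emb
        h = self.encoder(h)                   # self-attention over the look-back window
        return self.head(h[:, -1, :])         # forecast for the next calendar year

# Synthetic stand-in for an estimated k_t series (a drifting random walk).
kt = np.cumsum(np.random.randn(80)) - 0.5 * np.arange(80)
window = 10
X = np.stack([kt[i:i + window] for i in range(len(kt) - window)])[..., None]
y = kt[window:, None]
X_t = torch.tensor(X, dtype=torch.float32)
y_t = torch.tensor(y, dtype=torch.float32)

model = KtTransformer(window=window)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for _ in range(200):                          # short illustrative training loop
    optimizer.zero_grad()
    loss = loss_fn(model(X_t), y_t)
    loss.backward()
    optimizer.step()

with torch.no_grad():
    next_kt = model(X_t[-1:]).item()          # forecast for the year after the sample

A forecast of the time index obtained this way would then be plugged back into the Poisson-Lee-Carter decomposition, together with the fitted age effects, to produce predicted mortality rates, whose accuracy could be compared against LSTM, ARIMA, or exponential smoothing forecasts using a criterion such as MAPE.
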
Pages: 81-103
Page count: 23