Flexible transition timing in discrete-time multistate life tables using Markov chains with rewards

Cited by: 4
Authors
Schneider, Daniel C. [1, 3]
Myrskylä, Mikko [1, 2]
van Raalte, Alyson [1]
Affiliations
[1] Max Planck Inst Demog Res, Rostock, Germany
[2] Univ Helsinki, Helsinki, Finland
[3] Max Planck Inst Demog Res, Konrad Zuse Str 1, D-18057 Rostock, Germany
Source
POPULATION STUDIES-A JOURNAL OF DEMOGRAPHY | 2024, Vol. 78, Issue 3
Funding
European Research Council
Keywords
life tables; multistate models; Markov chains; working life expectancy; discrete-time event history analysis; Human Mortality Database (HMD); Survey of Health, Ageing and Retirement in Europe (SHARE); WORKING LIFE; EXPECTANCY; STOCHASTICITY; RETIREMENT; MODELS; HEALTH; TRENDS; AGE
DOI
10.1080/00324728.2023.2176535
CLC Classification
C921 [Demography]
Abstract
Discrete-time multistate life tables are attractive because they are easier to understand and apply than their continuous-time counterparts. While such models are based on a discrete time grid, it is often useful to calculate derived magnitudes (e.g. state occupation times) under assumptions that transitions take place at other times, such as mid-period. Unfortunately, currently available models allow very few choices about transition timing. We propose the use of Markov chains with rewards as a general way of incorporating information on the timing of transitions into the model. We illustrate the usefulness of rewards-based multistate life tables by estimating working life expectancies under different retirement transition timings. We also demonstrate that, for the single-state case, the rewards approach matches traditional life-table methods exactly. Finally, we provide code to replicate all results of the paper, as well as R and Stata packages for general use of the proposed method.
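The rewards idea described in the abstract can be sketched in a few lines. The following is a minimal illustration under assumed inputs, not the paper's R/Stata implementation: it uses a hypothetical three-state working/retired/dead model with a made-up, age-invariant transition matrix. Each period's reward credits person-years of work, and lowering the credit for exit transitions from 1.0 to 0.5 switches the timing assumption from end-of-period to mid-period without touching the chain itself.

```python
import numpy as np

# Hypothetical 3-state model: 0 = working, 1 = retired, 2 = dead.
# Illustrative, age-invariant one-period transition probabilities
# (made up for this sketch; a real application would use age-specific matrices).
P = np.array([
    [0.90, 0.08, 0.02],   # working -> working / retired / dead
    [0.00, 0.95, 0.05],   # retired -> retired / dead
    [0.00, 0.00, 1.00],   # dead is absorbing
])

# Reward matrix R[i, j]: person-years of work credited to a period
# that starts in state i and ends in state j.
R_end = np.array([        # end-of-period timing: exits credit a full period
    [1.0, 1.0, 1.0],
    [0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0],
])
R_mid = R_end.copy()      # mid-period timing: exits credit half a period
R_mid[0, 1] = R_mid[0, 2] = 0.5

def expected_reward(P, R, n_steps):
    """Expected accumulated reward by starting state over a finite horizon,
    via the backward recursion v_t = r + P v_{t+1} with v_T = 0."""
    r = (P * R).sum(axis=1)   # expected one-period reward per starting state
    v = np.zeros(P.shape[0])
    for _ in range(n_steps):
        v = r + P @ v
    return v

v_end = expected_reward(P, R_end, 60)
v_mid = expected_reward(P, R_mid, 60)
# Mid-period timing yields a shorter working life expectancy than
# end-of-period timing for someone starting in the working state.
```

Only the reward matrix changes between the two timing assumptions; that separation of the chain (P) from the timing convention (R) is what makes the rewards formulation flexible.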
Pages: 413-427 (15 pages)