Uncertainty quantification for epidemiological forecasts of COVID-19 through combinations of model predictions

Cited by: 6
Authors
Silk, D. S. [1 ]
Bowman, V. E. [1 ]
Semochkina, D. [2 ]
Dalrymple, U. [1 ]
Woods, D. C. [2 ]
Affiliations
[1] Def Sci & Technol Lab, Porton Down, Salisbury, Wilts, England
[2] Univ Southampton, Stat Sci Res Inst, Salisbury, Wilts, England
Keywords
COVID-19; uncertainty quantification; model combination; disease forecasting; model stacking; calibration
DOI
10.1177/09622802221109523
Chinese Library Classification: R19 [Health care organization and services (health service administration)]
Abstract
Scientific advice to the UK government throughout the COVID-19 pandemic has been informed by ensembles of epidemiological models provided by members of the Scientific Pandemic Influenza Group on Modelling. Among other applications, the model ensembles have been used to forecast daily incidence, deaths and hospitalizations. The models differ in approach (e.g. deterministic or agent-based) and in assumptions made about the disease and population. These differences capture genuine uncertainty in the understanding of disease dynamics and in the choice of simplifying assumptions underpinning the model. Although analyses of multi-model ensembles can be logistically challenging when time-frames are short, accounting for structural uncertainty can improve accuracy and reduce the risk of over-confidence in predictions. In this study, we compare the performance of various ensemble methods to combine short-term (14-day) COVID-19 forecasts within the context of the pandemic response. We address practical issues around the availability of model predictions and make some initial proposals to address the shortcomings of standard methods in this challenging situation.
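The abstract refers to combining the predictive distributions of several models into a single ensemble forecast. As a purely illustrative sketch (not the authors' method), the snippet below combines per-model predictive quantiles over a 14-day horizon by weighted quantile averaging, with weights derived from a generic interval score on past performance; the score, the weighting rule and all variable names are assumptions made for illustration only.

```python
# Illustrative sketch only: quantile-averaging ensemble of 14-day forecasts.
# The interval score and inverse-score weights are generic textbook choices,
# not the specific combination methods compared in the paper.
import numpy as np

def interval_score(lower, upper, observed, alpha=0.1):
    """Interval score for a central (1 - alpha) prediction interval (lower is better)."""
    width = upper - lower
    below = 2.0 / alpha * np.maximum(lower - observed, 0.0)
    above = 2.0 / alpha * np.maximum(observed - upper, 0.0)
    return width + below + above

def combine_quantiles(model_quantiles, weights=None):
    """Weighted average of predictive quantiles across models.

    model_quantiles: array of shape (n_models, n_days, n_quantiles)
    weights: optional array of shape (n_models,); equal weights if None.
    """
    model_quantiles = np.asarray(model_quantiles, dtype=float)
    if weights is None:
        weights = np.full(model_quantiles.shape[0], 1.0 / model_quantiles.shape[0])
    weights = np.asarray(weights, dtype=float) / np.sum(weights)
    return np.tensordot(weights, model_quantiles, axes=1)  # shape (n_days, n_quantiles)

# Toy example: three hypothetical models, a 14-day horizon, 5%/50%/95% quantiles.
truth = 100 + 5 * np.arange(14)  # hypothetical daily incidence
models = np.stack([
    np.stack([truth * s - 20, truth * s, truth * s + 20], axis=1)
    for s in (0.9, 1.0, 1.2)     # models with different systematic biases
])

# Weight each model by inverse mean interval score; in practice the weights
# would be estimated on held-out forecasts rather than the same data.
scores = np.array([interval_score(m[:, 0], m[:, 2], truth).mean() for m in models])
weights = (1.0 / scores) / np.sum(1.0 / scores)
ensemble = combine_quantiles(models, weights)
print("model weights:", np.round(weights, 3))
print("day-1 ensemble quantiles:", np.round(ensemble[0], 1))
```

Quantile averaging is only one of several possible pooling choices; mixing the predictive densities instead, or estimating weights by optimising a proper scoring rule such as the CRPS, would give different ensembles.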
Pages: 1778-1789
Number of pages: 12