Cross validation for the classical model of structured expert judgment

Cited by: 62
Authors
Colson, Abigail R. [1,2]
Cooke, Roger M. [2,3,4]
Affiliations
[1] Ctr Dis Dynam Econ & Policy, Washington, DC USA
[2] Univ Strathclyde, Glasgow, Lanark, Scotland
[3] Resources Future Inc, Washington, DC 20036 USA
[4] TU Delft (Ret.), Delft, Netherlands
Keywords
Expert judgment; Calibration; Information; Classical model; Out-of-sample validation; MONTSERRAT; VOLCANO
DOI
10.1016/j.ress.2017.02.003
Chinese Library Classification
T [Industrial Technology]
Subject Classification Code
08
Abstract
We update the 2008 TU Delft structured expert judgment database with data from 33 professionally contracted Classical Model studies conducted between 2006 and March 2015 to evaluate the model's performance relative to other expert aggregation models. We briefly review alternative mathematical aggregation schemes, including harmonic weighting, before focusing on linear pooling of expert judgments with equal weights and performance-based weights. In-sample, performance weighting outperforms equal weighting in all but one of the 33 studies. True out-of-sample validation is rarely possible for Classical Model studies, so cross validation techniques that split the calibration questions into a training set and a test set are used instead. Performance weighting incurs an "out-of-sample penalty": its statistical accuracy out-of-sample is lower than that of equal weighting. However, as a function of training set size, the statistical accuracy of performance-based combinations reaches 75% of the equal-weight value when the training set includes 80% of the calibration variables; at this point the training set is sufficiently powerful to resolve differences in individual expert performance. The information of performance-based combinations is double that of equal weighting when the training set contains at least 50% of the calibration variables. Previous out-of-sample validation work used a Total Out-of-Sample Validity Index based on all splits of the calibration questions into training and test subsets, which is expensive to compute and includes small training sets of dubious value. As an alternative, we propose an Out-of-Sample Validity Index based on averaging the product of statistical accuracy and information over all training sets sized at 80% of the calibration set. Performance weighting outperforms equal weighting on this Out-of-Sample Validity Index in 26 of the 33 post-2006 studies; the probability of 26 or more successes on 33 trials, if there were no difference between performance weighting and equal weighting, is 0.001.
Pages: 109-120
Page count: 12