Assessing the practical differences between model selection methods in inferences about choice response time tasks

Cited by: 63
Author
Evans, Nathan J. [1]
Affiliation
[1] University of Amsterdam, Department of Psychology, Amsterdam, Netherlands
Keywords
Model selection; Decision-making; Response time modeling; Bayes factors; Predictive accuracy; Diffusion model; Speed; Decomposition; Likelihood; Accuracy; Tutorial
DOI
10.3758/s13423-018-01563-9
Chinese Library Classification
B841 [Psychological research methods]
Discipline Code
040201
Abstract
Evidence accumulation models (EAMs) have become the dominant modeling framework for rapid decision-making, using choice response time distributions to make inferences about the underlying decision process. These models are often applied to empirical data as "measurement tools", with different theoretical accounts being contrasted within the framework of the model. Some method is then needed to decide between these competing theoretical accounts, as assessing the models only on their ability to fit trends in the empirical data ignores model flexibility, and therefore creates a bias towards more flexible models. However, there is no objectively optimal method to select between models, with methods varying in both their computational tractability and their theoretical basis. I provide a systematic comparison between nine different model selection methods using a popular EAM, the linear ballistic accumulator (LBA; Brown & Heathcote, Cognitive Psychology, 57(3), 153-178, 2008), in a large-scale simulation study and the empirical data of Dutilh et al. (Psychonomic Bulletin & Review, 1-19, 2018). I find that the "predictive accuracy" class of methods (i.e., the Akaike Information Criterion [AIC], the Deviance Information Criterion [DIC], and the Widely Applicable Information Criterion [WAIC]) makes different inferences from the "Bayes factor" class of methods (i.e., the Bayesian Information Criterion [BIC] and Bayes factors) in many, but not all, instances, and that the simpler methods (i.e., AIC and BIC) make inferences that are highly consistent with their more complex counterparts. These findings suggest that researchers should be able to use the simpler "parameter counting" methods when applying the LBA and be confident in their inferences, but that researchers need to carefully consider and justify the general class of model selection method that they use, as different classes of methods often result in different inferences.
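For reference, since the record itself does not define them, the criteria contrasted in the abstract are sketched below in their standard textbook forms (these are not expressions taken from the paper). Here $L$ is the likelihood, $\hat{\theta}$ the maximum-likelihood estimate, $\bar{\theta}$ the posterior mean, $k$ the number of free parameters, and $n$ the number of observations:

\begin{aligned}
\text{AIC} &= -2\log L(\hat{\theta}) + 2k,\\
\text{BIC} &= -2\log L(\hat{\theta}) + k\log n,\\
\text{DIC} &= \overline{D} + p_D, \qquad D(\theta) = -2\log L(\theta),\quad p_D = \overline{D} - D(\bar{\theta}),\\
\text{WAIC} &= -2\left(\text{lppd} - p_{\text{WAIC}}\right),\\
\text{BF}_{12} &= \frac{p(y \mid M_1)}{p(y \mid M_2)}, \qquad p(y \mid M_i) = \int p(y \mid \theta, M_i)\, p(\theta \mid M_i)\, d\theta.
\end{aligned}

AIC and BIC are the "parameter counting" methods mentioned in the abstract: both penalize fit through $k$ alone, making them cheap to compute once a maximum-likelihood fit is available, whereas DIC, WAIC, and Bayes factors require posterior samples or marginal likelihoods.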
Pages: 1070-1098
Page count: 29
References
85 in total
[61] Schwarz, G. (1978). Estimating the dimension of a model. Annals of Statistics, 6(2), 461-464.
[62] Shiffrin, R. M., Lee, M. D., Kim, W., & Wagenmakers, E.-J. (2008). A survey of model evaluation approaches with a tutorial on hierarchical Bayesian methods. Cognitive Science, 32(8), 1248-1284.
[63] Singmann, H. (2018). New methods in neuroscience and cognitive psychology.
[64] Spektor, M. S. (2018). Psychonomic Bulletin & Review. DOI: 10.3758/s13423-018-1446-5.
[65] Spiegelhalter, D. J., Best, N. G., Carlin, B. P., & van der Linde, A. (2002). Bayesian measures of model complexity and fit. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 64, 583-616.
[66] Starns, J. J., & Ratcliff, R. (2012). Age-related differences in diffusion model boundary optimality with both trial-limited and time-limited tasks. Psychonomic Bulletin & Review, 19(1), 139-145.
[67] Stefan, A. (2018). A tutorial on Bayes factor design analysis.
[68] Stone, M. (1960). Models for choice-reaction time. Psychometrika, 25(3), 251-260.
[69] Ter Braak, C. J. F. (2006). A Markov Chain Monte Carlo version of the genetic algorithm differential evolution: Easy Bayesian computing for real parameter spaces. Statistics and Computing, 16(3), 239-249.
[70] Tillman, G., Benders, T., Brown, S. D., & van Ravenzwaaij, D. (2017). An evidence accumulation model of acoustic cue weighting in vowel perception. Journal of Phonetics, 61, 1-12.