Biased Belief Updating in Causal Reasoning About COVID-19

Cited by: 1
Authors
Gugerty, Leo [1 ]
Shreeves, Michael [2 ]
Dumessa, Nathan [1 ]
Affiliations
[1] Clemson Univ, Dept Psychol, 418 Brackett Hall, Clemson, SC 29634 USA
[2] Arizona State Univ, Dept Psychol, Tempe, AZ USA
Keywords
belief bias; causal strength; causal reasoning; rational models; COVID-19; strength; covariation; information; statistics; vaccine; trust; size
DOI
10.1037/xap0000383
Chinese Library Classification (CLC): B849 [Applied Psychology]
Discipline code: 040203
Abstract
In three experiments with 977 participants, we investigated whether people show belief bias by letting their prior beliefs on politically charged topics unduly influence their reasoning when updating beliefs based on evidence. Participants saw data from fictional studies and judged how strongly COVID-19 mitigation measures influenced the number of COVID-19 cases (political problems) or how strongly a medicine influenced the number of headaches (neutral problems). Based on rational Bayesian models that use strong versus weak priors to represent biased beliefs about causal strength, we predicted that people who strongly supported mitigation measures (mainly liberals) would overestimate causal strength on political problems relative to neutral problems, while those who strongly opposed mitigation measures (mainly conservatives) would underestimate strength on political problems. Results suggested that belief bias is driven more by specific beliefs relevant to the reasoning context than by general attitudinal factors like political ideology. In Experiments 1 and 2, liberals and conservatives who strongly supported mitigation measures overestimated strength on political problems. In Experiment 3, conservatives who strongly opposed mitigation measures underestimated causal strength on political problems, and conservatives who supported mitigation measures made higher strength judgments on political problems than those who opposed these measures.
Pages: 695-721
Page count: 27