Reporting errors and biases in published empirical findings: Evidence from innovation research

Cited by: 29
Authors
Bruns, Stephan B. [1 ,2 ]
Asanov, Igor [3 ,4 ]
Bode, Rasmus [3 ,4 ]
Dunger, Melanie [2 ]
Funk, Christoph [9 ]
Hassan, Sherif M. [6 ,7 ,8 ]
Hauschildt, Julia [3 ]
Heinisch, Dominik [3 ,4 ]
Kempa, Karol [10 ]
Koenig, Johannes [3 ,4 ]
Lips, Johannes [9 ]
Verbeck, Matthias [5 ]
Wolfschuetz, Eva [3 ]
Buenstorf, Guido [3 ,4 ,11 ]
Affiliations
[1] Hasselt Univ, Ctr Environm Sci, Hasselt, Belgium
[2] Univ Gottingen, Dept Econ, Gottingen, Germany
[3] Univ Kassel, Dept Econ, Kassel, Germany
[4] Univ Kassel, Int Ctr Higher Educ Res, Kassel, Germany
[5] Univ Marburg, Dept Econ, Marburg, Germany
[6] Univ Marburg, Ctr Near & Middle Eastern Studies, Marburg, Germany
[7] M&S Res Hub, Kassel, Germany
[8] Suez Canal Univ, Dept Econ, Ismailia, Egypt
[9] Univ Giessen, Dept Econ, Giessen, Germany
[10] Frankfurt Sch Finance & Management, Dept Econ, Frankfurt, Germany
[11] Univ Gothenburg, Inst Innovat & Entrepreneurship, Gothenburg, Sweden
Keywords
Reporting bias; Reporting error; Innovation; p-hacking; Publication bias; Caliper test; Social sciences; Tests
DOI
10.1016/j.respol.2019.05.005
Chinese Library Classification (CLC)
C93 [Management];
Discipline classification codes
12; 1201; 1202; 120202;
Abstract
Errors and biases in published results compromise the reliability of empirical research, posing threats to the cumulative research process and to evidence-based decision making. We provide evidence on reporting errors and biases in innovation research. We find that 45% of the articles in our sample contain at least one result for which the reported statistical information is not consistent with the reported significance level. In 25% of the articles, we diagnose at least one strong reporting error, where a statistically non-significant finding becomes significant, or vice versa, at the common significance threshold of 0.1. At the level of individual tests, the error rate is small, with 4.0% of tests exhibiting any error and 1.4% showing strong errors. We also find systematically more marginally significant than marginally non-significant findings at the 0.05 and 0.1 thresholds of statistical significance. These discontinuities indicate the presence of reporting biases. Exploratory analysis suggests that the discontinuities are related to authors' affiliations and, to a lesser extent, to the article's rank within the issue and the style of reporting.
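As a rough illustration of the two checks described in the abstract, the Python sketch below recomputes a p-value from a reported t-statistic and compares it with the claimed significance (a strong reporting error, in the article's sense, occurs when the two disagree at the 0.1 threshold), and runs a simple binomial caliper test for an excess of results just below a significance threshold. The function names, the caliper width, and the toy numbers are illustrative assumptions, not the authors' actual procedure.

```python
# Minimal sketch of (1) a consistency check between a reported t-statistic and
# the claimed significance level and (2) a binomial caliper test around p = 0.05.
# All names, thresholds, and toy data are illustrative assumptions.

from scipy import stats


def p_from_t(t_value, dof):
    """Two-sided p-value implied by a reported t-statistic and degrees of freedom."""
    return 2 * stats.t.sf(abs(t_value), dof)


def strong_reporting_error(t_value, dof, claimed_significant, alpha=0.1):
    """Flag a case where the recomputed p-value and the claimed significance
    at the given threshold point in opposite directions."""
    recomputed_significant = p_from_t(t_value, dof) < alpha
    return recomputed_significant != claimed_significant


def caliper_test(p_values, threshold=0.05, caliper=0.005):
    """Compare counts of p-values just below vs. just above a significance
    threshold; absent reporting bias, both sides should be equally likely."""
    below = sum(threshold - caliper <= p < threshold for p in p_values)
    above = sum(threshold <= p < threshold + caliper for p in p_values)
    result = stats.binomtest(below, below + above, p=0.5, alternative="greater")
    return below, above, result.pvalue


if __name__ == "__main__":
    # A t-statistic of 1.5 with 100 degrees of freedom implies p ~ 0.14, so a
    # claim of significance at the 10% level is flagged as a strong error.
    print(strong_reporting_error(t_value=1.5, dof=100, claimed_significant=True))

    # Toy p-values: 5 results just below 0.05 vs. 2 just above it.
    toy_p = [0.046, 0.047, 0.048, 0.049, 0.049, 0.051, 0.052]
    print(caliper_test(toy_p))
```

In the same spirit as the article's test-level analysis, such checks can be run over every hypothesis test extracted from a sample of articles, with the caliper test repeated at the 0.05 and 0.1 thresholds.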
Pages: 13