How much baseline correction do we need in ERP research? Extended GLM model can replace baseline correction while lifting its limits

Cited by: 95
Author
Alday, Phillip M. [1 ]
Affiliation
[1] Max Planck Inst Psycholinguist, Postbus 310, NL-6500 AH Nijmegen, Netherlands
Keywords
analysis; statistical methods; EEG; ERPs; oscillation; time frequency analyses; high-pass filters; regression-based estimation; incorrect conclusions; power calculations; language; reliability; artifacts; fallacy
DOI
10.1111/psyp.13451
Chinese Library Classification
B84 [Psychology]
Discipline Classification Codes
04; 0402
Abstract
Baseline correction plays an important role in past and current methodological debates in ERP research (e.g., the Tanner vs. Maess debate in the Journal of Neuroscience Methods), serving as a potential alternative to strong high-pass filtering. However, the very assumptions that underlie traditional baseline correction also undermine it, implying a reduction in the signal-to-noise ratio. In other words, traditional baseline correction is statistically unnecessary and even undesirable. Including the baseline interval as a predictor in a GLM-based statistical approach allows the data to determine how much baseline correction is needed, including both full traditional baseline correction and no baseline correction as special cases. This reduces the amount of variance in the residual error term and thus has the potential to increase statistical power.
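The extended GLM idea in the abstract can be made concrete. Traditional baseline correction analyzes post - baseline, which is equivalent to fitting the regression post = b0 + b1*condition + b2*baseline + error with the baseline slope b2 fixed at 1; no baseline correction fixes b2 at 0. The extended GLM instead estimates b2 from the data. The following is a minimal illustrative sketch, not the paper's code: the simulated two-condition design, effect sizes, and variable names (post, baseline, condition) are all assumptions made for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
# Mean amplitude in the pre-stimulus baseline window (simulated).
baseline = rng.normal(0.0, 1.0, n)
# Hypothetical two-condition design, coded 0/1.
condition = rng.integers(0, 2, n)
# Simulated post-stimulus amplitude: a condition effect, partial
# carry-over of baseline drift (slope 0.6), and noise.
post = 0.5 * condition + 0.6 * baseline + rng.normal(0.0, 1.0, n)

df = pd.DataFrame({"post": post, "baseline": baseline, "condition": condition})

# Traditional baseline correction would analyze post - baseline
# (baseline slope fixed at 1); no correction analyzes post alone
# (slope fixed at 0). The extended GLM estimates the slope instead:
fit = smf.ols("post ~ condition + baseline", data=df).fit()
print(fit.params)  # baseline slope lands near 0.6, between the special cases
```

Because ordinary least squares minimizes residual variance over b2, letting the data choose the baseline slope cannot fit worse in-sample than forcing b2 = 1 or b2 = 0, which is the mechanism behind the potential power gain the abstract describes.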
Pages: 14