The Perils and Pitfalls of Block Design for EEG Classification Experiments

Cited by: 61
Authors
Li, Ren [1 ]
Johansen, Jared S. [1 ]
Ahmed, Hamad [1 ]
Ilyevsky, Thomas, V [1 ]
Wilbur, Ronnie B. [2 ,3 ]
Bharadwaj, Hari M. [2 ,4 ]
Siskind, Jeffrey Mark [1 ]
Affiliations
[1] Purdue Univ, Sch Elect & Comp Engn, W Lafayette, IN 47907 USA
[2] Purdue Univ, Dept Speech Language & Hearing Sci, W Lafayette, IN 47907 USA
[3] Purdue Univ, Dept Linguist, W Lafayette, IN 47907 USA
[4] Purdue Univ, Weldon Sch Biomed Engn, W Lafayette, IN 47907 USA
Funding
U.S. National Science Foundation;
Keywords
Object classification; EEG; neuroimaging;
DOI
10.1109/TPAMI.2020.2973153
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405
Abstract
A recent paper [1] claims to classify brain processing evoked in subjects watching ImageNet stimuli as measured with EEG and to employ a representation derived from this processing to construct a novel object classifier. That paper, together with a series of subsequent papers [2], [3], [4], [5], [6], [7], [8], claims to achieve successful results on a wide variety of computer-vision tasks, including object classification, transfer learning, and generation of images depicting human perception and thought using brain-derived representations measured through EEG. Our novel experiments and analyses demonstrate that their results crucially depend on the block design that they employ, where all stimuli of a given class are presented together, and fail with a rapid-event design, where stimuli of different classes are randomly intermixed. The block design leads to classification of arbitrary brain states based on block-level temporal correlations that are known to exist in all EEG data, rather than stimulus-related activity. Because every trial in their test sets comes from the same block as many trials in the corresponding training sets, their block design thus leads to classifying arbitrary temporal artifacts of the data instead of stimulus-related activity. This invalidates all subsequent analyses performed on this data in multiple published papers and calls into question all of the reported results. We further show that a novel object classifier constructed with a random codebook performs as well as or better than a novel object classifier constructed with the representation extracted from EEG data, suggesting that the performance of their classifier constructed with a representation extracted from EEG data does not benefit from the brain-derived representation. Together, our results illustrate the far-reaching implications of the temporal autocorrelations that exist in all neuroimaging data for classification experiments. Further, our results calibrate the underlying difficulty of the tasks involved and caution against overly optimistic, but incorrect, claims to the contrary.
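To make the confound concrete, the following minimal sketch (illustrative only, not the authors' code; it assumes NumPy and scikit-learn, and all parameter values are invented for the example) simulates trials that contain only a slowly drifting baseline plus noise and no stimulus-related signal at all. When classes are presented in blocks, a standard classifier scores well above chance because test trials come from the same block as many training trials; when the same labels are randomly intermixed, as in a rapid-event design, accuracy falls to roughly chance.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_classes, trials_per_class, n_features = 6, 50, 32

def simulate_trials(labels_in_presentation_order):
    # Each trial is noise around a slowly drifting baseline; there is
    # deliberately NO class-dependent signal anywhere in the data.
    baseline = np.zeros(n_features)
    trials = []
    for _ in labels_in_presentation_order:
        baseline += 0.1 * rng.standard_normal(n_features)   # slow temporal drift
        trials.append(baseline + rng.standard_normal(n_features))
    return np.array(trials)

# Block design: all trials of a class are presented together, so class
# identity is confounded with whatever the baseline happens to be in that block.
block_labels = np.repeat(np.arange(n_classes), trials_per_class)
X_block = simulate_trials(block_labels)

# Rapid-event design: identical generative process, but class order is
# randomly intermixed, so the drift carries no class information.
event_labels = rng.permutation(block_labels)
X_event = simulate_trials(event_labels)

clf = LogisticRegression(max_iter=2000)
print("block design accuracy:", cross_val_score(clf, X_block, block_labels, cv=5).mean())
print("rapid-event accuracy:", cross_val_score(clf, X_event, event_labels, cv=5).mean())

On typical runs of this sketch the block-design score should land far above the 1/6 chance level while the rapid-event score should hover near chance, even though neither dataset contains any stimulus-related activity.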
Pages: 316-333
Number of pages: 18
References (36 in total)
[1]  
[Anonymous], 2016, arXiv:1609.00344
[2]  
[Anonymous], 2016, P INT C LEARN REPR
[3]  
[Anonymous], P 2009 IEEE C COMPUT, DOI 10.1109/CVPR.2009.5206557
[4]  
[Anonymous], 2014, Proceedings of the 2nd International Conference on Learning Representations (ICLR 2014), Conference Track Proceedings, Banff, AB, Canada, 14-16 April 2014
[5]  
[Anonymous], 2011, J VISION
[6]  
Barbu A, 2014, LECT NOTES COMPUT SC, V8693, P612, DOI 10.1007/978-3-319-10602-1_40
[7]   Brain Activity-Based Image Classification From Rapid Serial Visual Presentation [J].
Bigdely-Shamlo, Nima ;
Vankov, Andrey ;
Ramirez, Rey R. ;
Makeig, Scott .
IEEE TRANSACTIONS ON NEURAL SYSTEMS AND REHABILITATION ENGINEERING, 2008, 16 (05) :432-441
[8]
Bullmore ET, 2001, HUM BRAIN MAPP, V12, P61, DOI 10.1002/1097-0193(200102)12:2<61::AID-HBM1004>3.0.CO;2-W
[9]
[10]   Convolutional Neural Networks for P300 Detection with Application to Brain-Computer Interfaces [J].
Cecotti, Hubert ;
Graeser, Axel .
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2011, 33 (03) :433-445