Interpreting encoding and decoding models

Cited by: 104
Authors
Kriegeskorte, Nikolaus [1 ]
Douglas, Pamela K. [2 ]
Affiliations
[1] Columbia Univ, Dept Psychol, Zuckerman Mind Brain Behav Inst, Dept Neurosci, Dept Elect Engn, New York, NY 10027 USA
[2] Univ Calif Los Angeles, Ctr Cognit Neurosci, Los Angeles, CA USA
Keywords
brain activity; cortical representations; neural representation; natural images; information; fMRI; patterns; object; reconstruction; activation
DOI
10.1016/j.conb.2019.04.002
Chinese Library Classification (CLC)
Q189 [Neuroscience];
Subject classification code
071006;
Abstract
Encoding and decoding models are widely used in systems, cognitive, and computational neuroscience to make sense of brain-activity data. However, the interpretation of their results requires care. Decoding models can help reveal whether particular information is present in a brain region in a format the decoder can exploit. Encoding models make comprehensive predictions about representational spaces. In the context of sensory experiments, where stimuli are experimentally controlled, encoding models enable us to test and compare brain-computational theories. Encoding and decoding models typically include fitted linear-model components. Sometimes the weights of the fitted linear combinations are interpreted as reflecting, in an encoding model, the contribution of different sensory features to the representation or, in a decoding model, the contribution of different measured brain responses to a decoded feature. Such interpretations can be problematic when the predictor variables or their noise components are correlated and when priors (or penalties) are used to regularize the fit. Encoding and decoding models are evaluated in terms of their generalization performance. The correct interpretation depends on the level of generalization a model achieves (e.g. to new response measurements for the same stimuli, to new stimuli from the same population, or to stimuli from a different population). Significant decoding or encoding performance of a single model (at whatever level of generality) does not provide strong constraints for theory. Many models must be tested and inferentially compared for analyses to drive theoretical progress.
Pages: 167-179
Number of pages: 13
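
As a concrete illustration of the weight-interpretation caveat raised in the abstract (correlated predictor features combined with a regularization penalty), here is a minimal Python sketch using simulated data and scikit-learn. The data, the two features, and the penalty strength are assumptions chosen purely for illustration; none of it is taken from the paper.

# Minimal sketch (simulated data): fitted linear weights need not reflect
# each feature's true contribution when features are correlated and the
# fit is regularized.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(0)
n_stimuli = 200

# Two highly correlated stimulus features (hypothetical model units).
shared = rng.standard_normal(n_stimuli)
f1 = shared + 0.1 * rng.standard_normal(n_stimuli)
f2 = shared + 0.1 * rng.standard_normal(n_stimuli)
X = np.column_stack([f1, f2])

# Simulated response driven only by feature 1, plus measurement noise.
y = 1.0 * f1 + 0.0 * f2 + 0.5 * rng.standard_normal(n_stimuli)

ols = LinearRegression().fit(X, y)
ridge = Ridge(alpha=10.0).fit(X, y)

# The ordinary least-squares weights are noisy, and the ridge penalty
# spreads weight across both correlated features, so neither weight vector
# cleanly identifies the one feature that actually drives the response.
print("OLS weights:  ", ols.coef_)
print("Ridge weights:", ridge.coef_)

Running this typically shows the ridge fit assigning substantial weight to both features even though only one drives the simulated response; this is the kind of ambiguity the abstract warns against when weights are read as feature contributions, and it is why the paper emphasizes generalization performance and explicit model comparison instead.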