Words Jump-Start Vision: A Label Advantage in Object Recognition

Cited by: 107
Authors
Boutonnet, Bastien [1 ]
Lupyan, Gary [2 ]
Affiliations
[1] Leiden Univ, Leiden Inst Brain & Cognit, NL-2300 RA Leiden, Netherlands
[2] Univ Wisconsin, Dept Psychol, Madison, WI 53706 USA
Funding
U.S. National Science Foundation
Keywords
categorization; concepts; event-related potentials; language and thought; perception; representations; BRAINS ATTENTIONAL SET; CATEGORICAL PERCEPTION; TOP-DOWN; LANGUAGE; POTENTIALS; IMAGERY; REPRESENTATIONS; DEPLOYMENTS; MECHANISMS; COMPONENT
DOI
10.1523/JNEUROSCI.5111-14.2015
Chinese Library Classification
Q189 [Neuroscience]
Discipline Code
071006
Abstract
People use language to shape each other's behavior in highly flexible ways. Effects of language are often assumed to be "high-level" in that, whereas language clearly influences reasoning, decision making, and memory, it does not influence low-level visual processes. Here, we test the prediction that words are able to provide top-down guidance at the very earliest stages of visual processing by acting as powerful categorical cues. We investigated whether visual processing of images of familiar animals and artifacts was enhanced after hearing their name (e.g., "dog") compared with hearing an equally familiar and unambiguous nonverbal sound (e.g., a dog bark) in 14 English monolingual speakers. Because the relationship between words and their referents is categorical, we expected words to deploy more effective categorical templates, allowing for more rapid visual recognition. By recording EEGs, we were able to determine whether this label advantage stemmed from changes to early visual processing or later semantic decision processes. The results showed that hearing a word affected early visual processes and that this modulation was specific to the named category. An analysis of ERPs showed that the P1 was larger when people were cued by labels compared with equally informative nonverbal cues, an enhancement occurring within 100 ms of image onset, which also predicted behavioral responses occurring almost 500 ms later. Hearing labels modulated the P1 such that it distinguished between target and nontarget images, showing that words rapidly guide early visual processing.
Pages: 9329-9335
Page count: 7