Transformation of a temporal speech cue to a spatial neural code in human auditory cortex
Cited by: 14
Authors:
Fox, Neal P. [1]
Leonard, Matthew [1]
Sjerps, Matthias J. [2,3]
Chang, Edward F. [1,4]
Affiliations:
[1] Univ Calif San Francisco, Dept Neurol Surg, San Francisco, CA 94143 USA
[2] Radboud Univ Nijmegen, Ctr Cognit Neuroimaging, Donders Inst Brain Cognit & Behav, Nijmegen, Netherlands
[3] Max Planck Inst Psycholinguist, Nijmegen, Netherlands
[4] Univ Calif San Francisco, Weill Inst Neurosci, San Francisco, CA 94143 USA
Source:
ELIFE | 2020, Vol. 9
Funding:
National Institutes of Health (NIH);
Keywords:
VOICE-ONSET TIME;
LOCAL-FIELD POTENTIALS;
SPEAKING RATE;
INTERACTIVE ACTIVATION;
CATEGORICAL PERCEPTION;
RESPONSES;
INFORMATION;
DISCRIMINATION;
REPRESENTATION;
MODEL;
DOI:
10.7554/eLife.53051
Chinese Library Classification (CLC):
Q [Biological Sciences];
Discipline classification codes:
07 ;
0710 ;
09 ;
Abstract:
In speech, listeners extract continuously varying spectrotemporal cues from the acoustic signal to perceive discrete phonetic categories. Spectral cues are spatially encoded in the amplitude of responses in phonetically tuned neural populations in auditory cortex. It remains unknown whether similar neurophysiological mechanisms encode temporal cues like voice-onset time (VOT), which distinguishes sounds like /b/ and /p/. We used direct brain recordings in humans to investigate the neural encoding of temporal speech cues with a VOT continuum from /ba/ to /pa/. We found that distinct neural populations respond preferentially to VOTs from one phonetic category, and are also sensitive to sub-phonetic VOT differences within a population's preferred category. In a simple neural network model, simulated populations tuned to detect either temporal gaps or coincidences between spectral cues captured encoding patterns observed in real neural data. These results demonstrate that a spatial/amplitude neural code underlies the cortical representation of both spectral and temporal speech cues.
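Illustration (not from the paper): the abstract's idea of a temporal cue being recoded into a spatial/amplitude pattern can be sketched with two simulated populations, one acting as a "coincidence detector" for short VOTs (/b/-like) and one as a "gap detector" for long VOTs (/p/-like). The sigmoid tuning function, the 20 ms boundary, and all other parameter values below are illustrative assumptions, not the authors' model.

import numpy as np

def population_response(vot_ms, preferred="short", boundary_ms=20.0, slope_ms=8.0):
    """Response amplitude of a simulated population to a stimulus with the given VOT.

    A sigmoid centered on an assumed category boundary gives the population a
    preferred category (short-VOT /b/-like vs. long-VOT /p/-like), while its
    graded flanks preserve sub-phonetic, within-category VOT differences as
    amplitude differences. All parameter values are illustrative.
    """
    sign = 1.0 if preferred == "short" else -1.0
    return 1.0 / (1.0 + np.exp(-sign * (boundary_ms - vot_ms) / slope_ms))

# A /ba/-to-/pa/ VOT continuum in milliseconds, as in the stimulus design described above.
vot_continuum = np.linspace(0.0, 50.0, 11)
b_population = population_response(vot_continuum, preferred="short")  # "coincidence detector"
p_population = population_response(vot_continuum, preferred="long")   # "gap detector"

for vot, b_amp, p_amp in zip(vot_continuum, b_population, p_population):
    print(f"VOT {vot:4.1f} ms -> /b/-preferring {b_amp:.2f}, /p/-preferring {p_amp:.2f}")

Running the sketch prints a graded, complementary amplitude pattern across the two populations: which population responds most identifies the phonetic category, while the exact amplitude within the preferred population carries the sub-phonetic VOT information.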
Pages: 28