Cortical Tracking of Continuous Speech Under Bimodal Divided Attention

Cited by: 6
Authors
Xie, Zilong [1 ]
Brodbeck, Christian [2 ]
Chandrasekaran, Bharath [3 ]
Affiliations
[1] Florida State Univ, Sch Commun Sci & Disorders, Tallahassee, FL 32306 USA
[2] Univ Connecticut, Dept Psychol Sci, Storrs, CT USA
[3] Univ Pittsburgh, Dept Commun Sci & Disorders, Pittsburgh, PA 15260 USA
Source
NEUROBIOLOGY OF LANGUAGE | 2023, Vol. 4, No. 2
Funding
US National Science Foundation; US National Institutes of Health;
Keywords
acoustic processing; continuous speech; crossmodal; divided attention; EEG; linguistic processing; VISUAL PERCEPTUAL LOAD; COGNITIVE LOAD; SELECTIVE ATTENTION; BRAIN-STEM; RESPONSES; CAPACITY; CORTEX; MEMORY; EEG; ERP;
DOI
10.1162/nol_a_00100
Chinese Library Classification
H0 [Linguistics];
Discipline Classification Codes
030303 ; 0501 ; 050102 ;
Abstract
Speech processing often occurs amid competing inputs from other modalities, for example, listening to the radio while driving. We examined the extent to which dividing attention between auditory and visual modalities (bimodal divided attention) impacts neural processing of natural continuous speech, from acoustic to linguistic levels of representation. We recorded electroencephalographic (EEG) responses while human participants performed a challenging primary visual task imposing low or high cognitive load and listened to audiobook stories as a secondary task. The two dual-task conditions were contrasted with an auditory single-task condition in which participants attended to stories while ignoring visual stimuli. Behaviorally, the high-load dual-task condition was associated with lower speech comprehension accuracy relative to the other two conditions. We fitted multivariate temporal response function encoding models to predict EEG responses from acoustic and linguistic speech features at different representation levels, including auditory spectrograms and information-theoretic models of sublexical-, word-form-, and sentence-level representations. Neural tracking of most acoustic and linguistic features remained unchanged with increasing dual-task load, despite unambiguous behavioral and neural evidence that the high-load dual-task condition was more demanding. Compared to the auditory single-task condition, the dual-task conditions selectively reduced neural tracking of only some acoustic and linguistic features, mainly at latencies >200 ms, while earlier latencies were surprisingly unaffected. These findings indicate that behavioral effects of bimodal divided attention on continuous speech processing arise not from impaired early sensory representations but likely at later cognitive processing stages. Crossmodal attention-related mechanisms may not be uniform across different speech processing levels.
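The temporal response function (TRF) encoding approach mentioned in the abstract can be understood as time-lagged linear regression: the EEG signal is modeled as a convolution of a stimulus feature (e.g., the speech envelope) with a kernel of weights over latencies. The following is a minimal illustrative simulation of that idea, not the authors' actual pipeline; the sampling rate, lag window, noise level, and regularization strength are all hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 100                       # sampling rate in Hz (hypothetical)
n = 60 * fs                    # 60 s of simulated data
lags = np.arange(40)           # latencies 0-390 ms at 100 Hz

# Simulated "speech envelope" and a ground-truth TRF kernel
envelope = np.abs(rng.standard_normal(n))
true_trf = np.exp(-lags / 10.0) * np.sin(lags / 4.0)

# Simulated EEG = envelope convolved with the TRF, plus noise
eeg = np.convolve(envelope, true_trf)[:n] + 0.5 * rng.standard_normal(n)

# Time-lagged design matrix: column j is the envelope delayed by lags[j]
X = np.zeros((n, len(lags)))
for j, lag in enumerate(lags):
    X[lag:, j] = envelope[: n - lag]

# Ridge regression estimate of the TRF: w = (X'X + lam*I)^-1 X'y
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(len(lags)), X.T @ eeg)

# Prediction accuracy (correlation between predicted and observed EEG)
# is the kind of quantity used to index "neural tracking"
pred = X @ w
r = np.corrcoef(pred, eeg)[0, 1]
```

In practice, multivariate TRF analyses of the sort the article describes fit many such feature columns jointly (spectrogram bands, phoneme surprisal, word frequency, etc.) and evaluate prediction accuracy on held-out data, so that differences in tracking across attention conditions can be compared per feature and per latency.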
Pages: 318-343 (26 pages)