Cortical Tracking of Speech-in-Noise Develops from Childhood to Adulthood

Cited by: 37
Authors
Vander Ghinst, Marc [1 ,2 ]
Bourguignon, Mathieu [1 ,3 ,5 ]
Niesen, Maxime [1 ,2 ]
Wens, Vincent [1 ,4 ]
Hassid, Sergio [2 ]
Choufani, Georges [2 ]
Jousmaki, Veikko [6 ,7 ]
Hari, Riitta [8 ]
Goldman, Serge [1 ,4 ]
De Tiege, Xavier [1 ,4 ]
Affiliations
[1] Univ Libre Bruxelles, UNI ULB, Neurosci Inst, Lab Cartog Fonct Cerveau, B-1070 Brussels, Belgium
[2] Univ Libre Bruxelles, Serv ORL & Chirurg Cervicofaciale, CUB Hop Erasme, B-1070 Brussels, Belgium
[3] Univ Libre Bruxelles, CUB Hop Erasme, Lab Cognit Langage & Dev, UNI ULB Neurosci Inst, B-1070 Brussels, Belgium
[4] Univ Libre Bruxelles, CUB Hop Erasme, Serv Nucl Med, Dept Funct Neuroimaging, B-1070 Brussels, Belgium
[5] Basque Ctr Cognit Brain & Language BCBL, Donostia San Sebastian 20009, Spain
[6] Aalto Univ, Sch Sci, Dept Neurosci & Biomed Engn, Aalto NeuroImaging, Espoo 00076, Finland
[7] Nanyang Technol Univ, Cognit Neuroimaging Ctr, Lee Kong Chian Sch Med, Singapore 636921, Singapore
[8] Aalto Univ, Sch Arts Design & Architecture, Dept Art, Helsinki 00076, Finland
Keywords
coherence analysis; magnetoencephalography; speech-in-noise; INFORMATIONAL MASKING; NEURONAL OSCILLATIONS; SELECTIVE ATTENTION; ATTENDED SPEECH; AUDITORY-CORTEX; COCKTAIL-PARTY; CHILDREN; MEG; BRAIN; IDENTIFICATION;
DOI
10.1523/JNEUROSCI.1732-18.2019
Chinese Library Classification
Q189 [Neuroscience];
Discipline Classification Code
071006;
Abstract
In multitalker backgrounds, the auditory cortex of adult humans tracks the attended speech stream rather than the global auditory scene. Still, it is unknown whether such preferential tracking also occurs in children, whose speech-in-noise (SiN) abilities are typically lower than those of adults. We used magnetoencephalography (MEG) to investigate the frequency-specific cortical tracking of different elements of a cocktail-party auditory scene in 20 children (age range, 6-9 years; 8 females) and 20 adults (age range, 21-40 years; 10 females). During MEG recordings, subjects attended to four different 5 min stories mixed with a multitalker background at four signal-to-noise ratios (SNRs: noiseless, +5, 0, and -5 dB). Coherence analysis quantified the coupling between the time courses of MEG activity and those of the attended speech stream, the multitalker background, and the global auditory scene. In adults, statistically significant coherence was observed between MEG signals originating from the auditory system and the attended stream at <1, 1-4, and 4-8 Hz in all SNR conditions. Children displayed similar coupling at <1 and 1-4 Hz, but increasing noise impaired this coupling more strongly than in adults. Children also displayed drastically lower coherence at 4-8 Hz in all SNR conditions. These results suggest that children's difficulty understanding speech in noisy conditions is related to an immature selective cortical tracking of the attended speech stream. Our results also provide unprecedented evidence for an acquired cortical tracking of speech at the syllable rate and argue for a progressive development of SiN abilities in humans.
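To make the coherence measure concrete, the short Python sketch below illustrates the general form of such an analysis. It is not the authors' pipeline: the 1 kHz sampling rate, the 5 s analysis windows, and the random placeholder signals standing in for the attended-speech waveform and a single MEG sensor are assumptions made purely for illustration. The sketch extracts the speech amplitude envelope with a Hilbert transform and estimates magnitude-squared coherence between that envelope and the sensor time course, averaged within the <1, 1-4, and 4-8 Hz bands analysed in the paper.

# Hypothetical sketch of speech-brain coherence analysis (not the authors' pipeline).
# Random placeholder signals stand in for the attended-speech waveform and one MEG sensor.
import numpy as np
from scipy.signal import hilbert, coherence

fs = 1000.0                                        # assumed common sampling rate (Hz)
duration = 300                                     # 5 min of data, as in one story
rng = np.random.default_rng(0)
speech = rng.standard_normal(int(duration * fs))   # placeholder speech waveform
meg = rng.standard_normal(int(duration * fs))      # placeholder MEG sensor time course

# Wideband amplitude envelope of the speech signal via the Hilbert transform
envelope = np.abs(hilbert(speech))

# Magnitude-squared coherence; 5 s windows give 0.2 Hz frequency resolution
f, cxy = coherence(envelope, meg, fs=fs, nperseg=int(5 * fs))

# Average coherence within the frequency bands analysed in the paper
bands = {"<1 Hz": (0.2, 1.0), "1-4 Hz": (1.0, 4.0), "4-8 Hz": (4.0, 8.0)}
for name, (lo, hi) in bands.items():
    in_band = (f >= lo) & (f < hi)
    print(f"{name}: mean coherence = {cxy[in_band].mean():.3f}")

In the study itself, the coupling was assessed between MEG signals originating from the auditory system and each element of the auditory scene (attended stream, multitalker background, and global scene) and tested for statistical significance; the sketch only conveys the general form of the coherence measure, not those steps.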
Pages: 2938 - 2950
Number of pages: 13