Audio-visual speech perception in adult readers with dyslexia: an fMRI study

Cited by: 25
Authors
Ruesseler, Jascha [1 ]
Ye, Zheng [2 ]
Gerth, Ivonne [3 ]
Szycik, Gregor R. [4 ]
Muente, Thomas F. [5 ,6 ]
Affiliations
[1] Otto Friedrich Univ Bamberg, Dept Psychol, Bamberg, Germany
[2] Chinese Acad Sci, Inst Psychol, Beijing, Peoples R China
[3] Klinikum Magdeburg, Neurol, Magdeburg, Germany
[4] Hannover Med Sch, Dept Psychiat, Hannover, Germany
[5] Univ Lubeck, Dept Neurol, Ratzeburger Allee 160, D-23562 Lubeck, Germany
[6] Univ Lubeck, Inst Psychol 2, Lubeck, Germany
Keywords
Developmental dyslexia; Audio-visual processing; Event-related fMRI; Independent component analysis; INDEPENDENT COMPONENT ANALYSIS; AUDITORY-VISUAL SPEECH; MULTISENSORY INTEGRATION; DEVELOPMENTAL DYSLEXIA; READING-DISABILITY; HEARING LIPS; NEURAL BASIS; SCHIZOPHRENIA; BRAIN; NETWORKS;
DOI
10.1007/s11682-017-9694-y
Chinese Library Classification
R445 [Diagnostic imaging];
Subject classification code
100207 ;
Abstract
Developmental dyslexia is a specific deficit in reading and spelling that often persists into adulthood. In the present study, we used slow event-related fMRI and independent component analysis (ICA) to identify brain networks involved in the perception of audio-visual speech in a group of adult readers with dyslexia (RD) and a group of fluent readers (FR). Participants saw a video of a female speaker saying a disyllabic word. In the congruent condition, audio and video input were identical, whereas in the incongruent condition the two inputs differed. Participants had to respond to occasionally occurring animal names. The ICA identified several components that were differentially modulated in FR and RD. Two of these components, including the fusiform gyrus and occipital gyrus, showed less activation in RD than in FR, possibly indicating a deficit in extracting the face information needed to integrate auditory and visual information in natural speech perception. A further component centered on the superior temporal sulcus (STS) also exhibited less activation in RD than in FR. This finding is corroborated by the univariate analysis, which shows less activation in the STS for RD compared to FR. These findings suggest a general impairment in the recruitment of audiovisual processing areas in dyslexia during the perception of natural speech.
Pages: 357-368
Page count: 12
Related papers
50 records total
  • [21] A manually denoised audio-visual movie watching fMRI dataset for the studyforrest project
    Liu, Xingyu
    Zhen, Zonglei
    Yang, Anmin
    Bai, Haohao
    Liu, Jia
    SCIENTIFIC DATA, 2019, 6 (1)
  • [22] Top-Down Predictions of Familiarity and Congruency in Audio-Visual Speech Perception at Neural Level
    Kolozsvari, Orsolya B.
    Xu, Weiyong
    Leppanen, Paavo H. T.
    Hamalainen, Jarmo A.
    FRONTIERS IN HUMAN NEUROSCIENCE, 2019, 13
  • [23] Audio-Visual Predictive Processing in the Perception of Humans and Robots
    Sarigul, Busra
    Urgen, Burcu A.
    INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS, 2023, 15 (05) : 855 - 865
  • [24] Cortical integration of audio-visual speech and non-speech stimuli
    Wyk, Brent C. Vander
    Ramsay, Gordon J.
    Hudac, Caitlin M.
    Jones, Warren
    Lin, David
    Klin, Ami
    Lee, Su Mei
    Pelphrey, Kevin A.
    BRAIN AND COGNITION, 2010, 74 (02) : 97 - 106
  • [25] Audio-visual temporal perception in children with restored hearing
    Gori, Monica
    Chilosi, Anna
    Forli, Francesca
    Burr, David
    NEUROPSYCHOLOGIA, 2017, 99 : 350 - 359
  • [26] Investigating human audio-visual object perception with a combination of hypothesis-generating and hypothesis-testing fMRI analysis tools
    Marcus J. Naumer
    Jasper J. F. van den Bosch
    Michael Wibral
    Axel Kohler
    Wolf Singer
    Jochen Kaiser
    Vincent van de Ven
    Lars Muckli
    Experimental Brain Research, 2011, 213 : 309 - 320
  • [27] Atypical delta-band phase consistency and atypical preferred phase in children with dyslexia during neural entrainment to rhythmic audio-visual speech
    Keshavarzi, Mahmoud
    Mandke, Kanad
    Macfarlane, Annabel
    Parvez, Lyla
    Gabrielczyk, Fiona
    Wilson, Angela
    Goswami, Usha
    NEUROIMAGE-CLINICAL, 2022, 35
  • [28] An Overview of Deep-Learning-Based Audio-Visual Speech Enhancement and Separation
    Michelsanti, Daniel
    Tan, Zheng-Hua
    Zhang, Shi-Xiong
    Xu, Yong
    Yu, Meng
    Yu, Dong
    Jensen, Jesper
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2021, 29 : 1368 - 1396
  • [29] Cortical operational synchrony during audio-visual speech integration
    Fingelkurts, AA
    Fingelkurts, AA
    Krause, CM
    Möttönen, R
    Sams, M
    BRAIN AND LANGUAGE, 2003, 85 (02) : 297 - 312
  • [30] Audio-Visual Speech Timing Sensitivity Is Enhanced in Cluttered Conditions
    Roseboom, Warrick
    Nishida, Shin'ya
    Fujisaki, Waka
    Arnold, Derek H.
    PLOS ONE, 2011, 6 (04)