Cross-Modal Binding in Developmental Dyslexia

Cited by: 25
Authors
Jones, Manon W. [1 ]
Branigan, Holly P. [2 ]
Parra, Mario A. [2 ]
Logie, Robert H. [2 ]
Affiliations
[1] Bangor Univ, Bangor, Gwynedd, Wales
[2] Univ Edinburgh, Dept Psychol, Edinburgh, Midlothian, Scotland
Funding
UK Economic and Social Research Council;
Keywords
cross-modal binding; dyslexia; spatial location; logit model; WORKING-MEMORY; PHONEME AWARENESS; BRAIN ACTIVATION; ATTENTION; INTEGRATION; FEATURES; OBJECT; CORTEX; WORD; INFORMATION;
DOI
10.1037/a0033334
Chinese Library Classification
B84 [Psychology];
Discipline Classification Codes
04; 0402;
Abstract
The ability to learn visual-phonological associations is a unique predictor of word reading, and individuals with developmental dyslexia show impaired ability in learning these associations. In this study, we compared developmentally dyslexic and nondyslexic adults on their ability to form cross-modal associations (or "bindings") based on a single exposure to pairs of visual and phonological features. Reading groups were therefore compared on the very early stages of associative learning. We used a working memory framework including experimental designs used to investigate cross-modal binding. Two change-detection experiments showed a group discrepancy in binding that was dependent on spatial location encoding: Whereas group performance was similar when location was an inconsistent cue (Experiment 1), nondyslexic readers showed higher accuracy in binding than dyslexics when location was a consistent cue (Experiment 2). A cued-recall task confirmed that location information discriminates binding ability between reading groups in a more explicit memory recall task (Experiment 3). Our results show that recall for ephemeral cross-modal bindings is supported by location information in nondyslexics, but this information cannot be used to similar effect in dyslexic readers. Our findings support previous demonstrations of cross-modal association difficulty in dyslexia and show that a group discrepancy exists even in a single, initial presentation of visual-phonological pairs. Effective use of location information as a retrieval cue is one mechanism that discriminates reading groups, which may contribute to the longer term cross-modal association problems characteristic of dyslexia.
Pages: 1807-1822
Page count: 16
Related Papers
50 records
  • [21] Cross-modal nonspatial repetition inhibition: An ERP study
    Wu, Xiaogang
    Wang, Aijun
    Zhang, Ming
    NEUROSCIENCE LETTERS, 2020, 734
  • [22] A cross-modal investigation of the neural substrates for ongoing cognition
    Wang, Megan
    He, Biyu J.
    FRONTIERS IN PSYCHOLOGY, 2014, 5
  • [23] Unveiling passive cross-modal reactivation and validation processes in the processing of multimedia material
    Schueler, Anne
    Frick, Pauline
    LEARNING AND INSTRUCTION, 2025, 97
  • [24] Developmental plasticity of multisensory circuitry: how early experience dictates cross-modal interactions
    Sarko, Diana K.
    Ghose, Dipanwita
    JOURNAL OF NEUROPHYSIOLOGY, 2012, 108 (11) : 2863 - 2866
  • [25] Cross-Modal Transformers for Infrared and Visible Image Fusion
    Park, Seonghyun
    Vien, An Gia
    Lee, Chul
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (02) : 770 - 785
  • [26] Cross-Modal Competition: The Default Computation for Multisensory Processing
    Yu, Liping
    Cuppini, Cristiano
    Xu, Jinghong
    Rowland, Benjamin A.
    Stein, Barry E.
    JOURNAL OF NEUROSCIENCE, 2019, 39 (08) : 1374 - 1385
  • [27] Cross-Modal Stimulus Conflict: The Behavioral Effects of Stimulus Input Timing in a Visual-Auditory Stroop Task
    Donohue, Sarah E.
    Appelbaum, Lawrence G.
    Park, Christina J.
    Roberts, Kenneth C.
    Woldorff, Marty G.
    PLOS ONE, 2013, 8 (04)
  • [28] Attentional cueing by cross-modal congruency produces both facilitation and inhibition on short-term visual recognition
    Makovac, Elena
    Kwok, Sze Chai
    Gerbino, Walter
    ACTA PSYCHOLOGICA, 2014, 152 : 75 - 83
  • [29] Cross-modal links in spatial attention
    Driver, J
    Spence, C
    PHILOSOPHICAL TRANSACTIONS OF THE ROYAL SOCIETY B-BIOLOGICAL SCIENCES, 1998, 353 (1373) : 1319 - 1331
  • [30] Deep Cross-Modal Age Estimation
    Aminian, Ali
    Noubir, Guevara
    ADVANCES IN COMPUTER VISION, CVC, VOL 1, 2020, 943 : 159 - 177