Memory representations in a cross-modal matching task: evidence for a verbal component

Times Cited: 1
Authors
Estabrooks, Katherine Marie [1 ]
Sohail, Muhammad Tayyab [1 ]
Song, Young In [1 ]
Desmarais, Genevieve [1 ]
Affiliations
[1] Mt Allison Univ, Dept Psychol, Sackville, NB, Canada
Source
FRONTIERS IN PSYCHOLOGY | 2023, Vol. 14
Keywords
multisensory integration; cognitive styles; memory representations; visual perception; haptic perception; object matching; OBJECT-SPATIAL IMAGERY; HAPTICS; VISION; CATEGORIZATION; INTERFERENCE; SIMILARITY; STYLES; TOUCH; EYE;
DOI
10.3389/fpsyg.2023.1253085
Chinese Library Classification (CLC)
B84 [Psychology];
Discipline Classification Codes
04 ; 0402 ;
Abstract
In everyday tasks, one often uses touch to find what has been seen. Recent research has identified that when individuals view or touch an object, they may create a verbal memory representation; however, this research involved object naming, which may have prompted the use of verbal strategies. Research has also identified variability in memory representations for objects, which may indicate individual differences. To investigate memory representations and their associations with individual differences in cognitive styles, we measured the cognitive styles of 127 participants and had them complete a non-verbal matching task without distractors, or with verbal or visual distractors. In the task, they viewed an object and then touched an object - or vice versa - and indicated whether the objects were the same or different. On trials where different objects were presented, participants responded consistently more slowly and made more matching errors for similar objects compared to distinct objects. Importantly, higher scores on the verbalizer cognitive style predicted faster reaction times on the matching task across all trial types and distraction conditions. Overall, this indicates that cross-modal object processing in short-term memory may be facilitated by a verbal code.
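The central result reported in the abstract is regression-style: verbalizer cognitive-style scores predicting reaction times across trial types and distraction conditions. As a rough illustration of how such a relationship can be modelled, the sketch below fits a linear mixed-effects model to simulated data using Python and statsmodels. The column names, simulated effect sizes, between-subjects assignment of distraction conditions, and model specification are illustrative assumptions only; they are not the authors' data or analysis pipeline.

```python
# Hypothetical sketch: reaction time (rt) modelled from verbalizer score,
# trial type, and distraction condition, with a random intercept per
# participant. Data are simulated purely so the example runs end to end.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

n_participants = 127
trials_per_cell = 10
trial_types = ["same", "different_similar", "different_distinct"]
distraction_conditions = ["none", "verbal", "visual"]

rows = []
for p in range(n_participants):
    verbalizer = rng.normal(0.0, 1.0)        # standardized verbalizer score (assumed)
    dist = distraction_conditions[p % 3]     # between-subjects distraction condition
    subj_offset = rng.normal(0.0, 0.05)      # participant-level random intercept
    for tt in trial_types:
        for _ in range(trials_per_cell):
            rt = (0.90                                        # baseline RT in seconds
                  + 0.10 * (tt == "different_similar")        # similar objects: slower
                  - 0.08 * verbalizer                         # higher verbalizer: faster
                  + subj_offset
                  + rng.normal(0.0, 0.15))                    # trial-level noise
            rows.append({"participant": p, "verbalizer": verbalizer,
                         "trial_type": tt, "distraction": dist, "rt": rt})

df = pd.DataFrame(rows)

# Fixed effects of verbalizer score, trial type, and distraction condition;
# random intercept grouped by participant.
model = smf.mixedlm("rt ~ verbalizer + trial_type + distraction",
                    data=df, groups=df["participant"])
result = model.fit()
print(result.summary())
```

In a model of this form, a negative coefficient on the verbalizer term would correspond to the reported pattern of higher verbalizer scores predicting faster reaction times, independent of trial type and distraction condition.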
Pages: 8