Event-related Potentials in Audio–Visual Cross-Modal Test for Comparison of Word Pairs

Cited by: 1
Authors
Nikishena I.S. [1 ]
Ponomarev V.A. [1 ]
Kropotov Y.D. [1 ]
Affiliations
[1] Bechtereva Institute of the Human Brain, Russian Academy of Sciences, St. Petersburg
Keywords
cross-modal interaction; event-related potentials; two-modal GO–NOGO paradigm; word’s phonological representation; working memory;
DOI: 10.1134/S0362119721020109
Abstract
The study investigated the specifics of brain function during the comparison of verbal signals under cross-modal interaction. Event-related potentials (ERPs) were recorded in 166 healthy subjects in a two-modal, two-stimulus GO–NOGO test in which the first stimulus was a visually presented word and the second was an auditorily presented word. It was shown that, while the subjects waited for the spoken word to follow the printed word, activation was recorded over the left frontotemporal region (F7, F3), presumably associated with the formation of the word’s phonological representation in memory. When the second (spoken) word coincided with the visually presented one, peak activation was recorded at the posterior temporal sites (T5, T6) in the 370–500 ms interval; when it did not coincide, peak activation was recorded at the occipital sites (O1, O2) in the 590–800 ms interval. These ERP deflections presumably reflect the comparison of the newly received auditory information with the phonological representation of the word stored in working memory. © 2021, Pleiades Publishing, Inc.
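
To make the reported measures concrete, the following is a minimal sketch (not the authors' analysis pipeline) of how mean ERP amplitude could be extracted in the time windows and at the electrode sites named in the abstract. The sampling rate, channel list, trial count, and synthetic data are hypothetical placeholders; only the window boundaries and site names come from the abstract.

    # Minimal sketch: trial-averaged ERP and mean amplitude in the reported
    # time windows at the reported electrode sites. All data here are synthetic
    # placeholders; a real analysis would use artifact-cleaned epoched EEG.
    import numpy as np

    fs = 500                                  # assumed sampling rate, Hz
    channels = ["F7", "F3", "T5", "T6", "O1", "O2"]
    n_trials = 40                             # assumed trial count
    times = np.arange(-0.2, 1.0, 1.0 / fs)    # epoch from -200 ms to 1000 ms

    # Synthetic epochs array: (trials, channels, samples), in microvolts.
    rng = np.random.default_rng(0)
    epochs = rng.normal(0.0, 5.0, (n_trials, len(channels), times.size))

    # ERP = average over trials for each channel.
    erp = epochs.mean(axis=0)                 # shape: (channels, samples)

    def window_mean(erp, times, channels, picks, tmin, tmax):
        """Mean ERP amplitude over selected channels within [tmin, tmax] s."""
        ch_idx = [channels.index(ch) for ch in picks]
        t_mask = (times >= tmin) & (times <= tmax)
        return erp[np.ix_(ch_idx, t_mask)].mean()

    # Windows from the abstract: 370-500 ms at T5/T6 (matching words),
    # 590-800 ms at O1/O2 (mismatching words).
    match_amp = window_mean(erp, times, channels, ["T5", "T6"], 0.370, 0.500)
    mismatch_amp = window_mean(erp, times, channels, ["O1", "O2"], 0.590, 0.800)
    print(f"T5/T6, 370-500 ms mean amplitude: {match_amp:.2f} uV")
    print(f"O1/O2, 590-800 ms mean amplitude: {mismatch_amp:.2f} uV")

In practice such windowed means (or peak latencies/amplitudes) would be computed separately per condition and subject before statistical comparison.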
Pages: 459–466
Page count: 7
Related Papers (50 total)
  • [21] Audio-Visual Event Localization based on Cross-Modal Interacting Guidance
    Yue, Qiurui
    Wu, Xiaoyu
    Gao, Jiayi
    2021 IEEE FOURTH INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND KNOWLEDGE ENGINEERING (AIKE 2021), 2021, : 104 - 107
  • [22] Visual event-related potentials in a quantity comparison task
    Mueller, MP
    Pang, EW
    Case, R
    JOURNAL OF COGNITIVE NEUROSCIENCE, 2000, : 104 - 104
  • [23] Program for Postprocessing Audio-Visual Event-Related Potentials (ERP)
    Garner, Christoph
    INTERNATIONAL JOURNAL OF PSYCHOPHYSIOLOGY, 2021, 168 : S70 - S70
  • [24] Masked cross-modal repetition priming: An event-related potential investigation
    Kiyonaga, Kristi
    Grainger, Jonathan
    Midgley, Katherine
    Holcomb, Phillip J.
    LANGUAGE AND COGNITIVE PROCESSES, 2007, 22 (03) : 337 - 376
  • [25] An event-related potential study on cross-modal conversion of Chinese characters
    Wu, Qiuyan
    Deng, Yuan
    NEUROSCIENCE LETTERS, 2011, 503 (02) : 147 - 151
  • [26] An event-related potential study of cross-modal morphological and phonological priming
    Justus, Timothy
    Yang, Jennifer
    Larsen, Jary
    Davies, Paul de Mornay
    Swick, Diane
    JOURNAL OF NEUROLINGUISTICS, 2009, 22 (06) : 584 - 604
  • [27] Event-related potentials during the continuous visual memory test
    Retzlaff, PD
    Morris, GL
    JOURNAL OF CLINICAL PSYCHOLOGY, 1996, 52 (01) : 43 - 47
  • [28] Cross-Modal Label Contrastive Learning for Unsupervised Audio-Visual Event Localization
    Bao, Peijun
    Yang, Wenhan
    Ng, Boon Poh
    Er, Meng Hwa
    Kot, Alex C.
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 1, 2023, : 215 - 222
  • [29] Cross-Modal Relation-Aware Networks for Audio-Visual Event Localization
    Xu, Haoming
    Zeng, Runhao
    Wu, Qingyao
    Tan, Mingkui
    Gan, Chuang
    MM '20: PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, 2020, : 3893 - 3901
  • [30] Cross-Modal Attention Network for Temporal Inconsistent Audio-Visual Event Localization
    Xuan, Hanyu
    Zhang, Zhenyu
    Chen, Shuo
    Yang, Jian
    Yan, Yan
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 279 - 286