Audio-visual facilitation of the mu rhythm

Cited by: 25
Authors
McGarry, Lucy M. [1]
Russo, Frank A. [1]
Schalles, Matt D. [2]
Pineda, Jaime A. [2,3]
Affiliations
[1] Ryerson Univ, Dept Psychol, Toronto, ON M5B 2K3, Canada
[2] Univ Calif San Diego, Dept Cognit Sci, La Jolla, CA 92037 USA
[3] Univ Calif San Diego, Neurosci Grp, La Jolla, CA 92037 USA
Funding
Natural Sciences and Engineering Research Council of Canada
Keywords
Mu rhythm; Mirror neuron system; Multimodal facilitation; Independent components analysis; GRASP REPRESENTATIONS; MOTOR FACILITATION; EEG; RECOGNITION; ACTIVATION; HUMANS; CORTEX; PERCEPTION; COMPONENT; PREMOTOR;
DOI
10.1007/s00221-012-3046-3
Chinese Library Classification (CLC)
Q189 [Neuroscience]
Discipline Code
071006
Abstract
Previous studies demonstrate that perception of action presented audio-visually elicits greater mirror neuron system (MNS) activity in humans (Kaplan and Iacoboni in Cogn Process 8(2):103-113, 2007) and non-human primates (Keysers et al. in Exp Brain Res 153(4):628-636, 2003) than perception of action presented unimodally. In the current study, we examined whether audio-visual facilitation of the MNS can be indexed using electroencephalography (EEG) measurement of the mu rhythm. The mu rhythm is an EEG oscillation with peaks at 10 and 20 Hz that is suppressed during the execution and perception of action and is speculated to reflect activity in the premotor and inferior parietal cortices resulting from MNS activation (Pineda in Behav Brain Funct 4(1):47, 2008). Participants observed randomized presentations of an action stimulus (two hands ripping a sheet of paper) and a control video (a box moving up and down), each presented unimodally (visual-alone or audio-alone) or bimodally. Audio-visual perception of the action stimuli led to greater event-related desynchronization (ERD) of the 8-13 Hz mu rhythm than unimodal perception of the same stimuli, both over the C3 electrode and in a left central cluster when the data were examined in source space. These results are consistent with Kaplan and Iacoboni's (Cogn Process 8(2):103-113, 2007) findings of audio-visual facilitation of the MNS; our left central cluster was localized approximately 13.89 mm from the ventral premotor cluster identified in their fMRI study, suggesting that the two clusters originate from similar sources. The consistency of results in electrode space and component space supports the use of independent component analysis (ICA) as a valid source localization tool.
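The abstract quantifies mu suppression as event-related desynchronization (ERD) of 8-13 Hz power at electrode C3, i.e., the percent drop in band power during stimulus presentation relative to a pre-stimulus baseline. As an illustrative sketch only, not the authors' analysis pipeline, the following Python code computes such a band-power ERD measure for a single channel; the filter order, window boundaries, and the simulated signal are assumptions made for the example.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def mu_erd_percent(signal, fs, baseline, active, band=(8.0, 13.0)):
    """Percent change in mu-band power between an active window and a baseline.

    signal   : 1-D array of EEG samples from one electrode (e.g., C3).
    fs       : sampling rate in Hz.
    baseline : (start_s, end_s) window before stimulus onset.
    active   : (start_s, end_s) window during stimulus presentation.
    Returns a percentage; negative values indicate ERD (suppression).
    """
    # Band-pass the signal to the 8-13 Hz mu range (assumed 4th-order Butterworth).
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    mu = filtfilt(b, a, signal)

    # Instantaneous band power from the analytic-signal envelope.
    power = np.abs(hilbert(mu)) ** 2

    def window_mean(win):
        i0, i1 = int(win[0] * fs), int(win[1] * fs)
        return power[i0:i1].mean()

    p_base, p_active = window_mean(baseline), window_mean(active)
    return 100.0 * (p_active - p_base) / p_base


# Hypothetical usage: 4 s of simulated data at 512 Hz with 10 Hz activity
# that disappears at t = 2 s, so the active window shows suppression.
fs = 512
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
sim = np.sin(2 * np.pi * 10 * t) * (t < 2) + 0.1 * rng.standard_normal(t.size)
print(mu_erd_percent(sim, fs, baseline=(0.5, 1.5), active=(2.5, 3.5)))
```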
Pages: 527-538
Page count: 12
Related Papers (50 total)
  • [31] Talker variability in audio-visual speech perception
    Heald, Shannon L. M.
    Nusbaum, Howard C.
    FRONTIERS IN PSYCHOLOGY, 2014, 5
  • [32] The Fungible Audio-Visual Mapping and its Experience
    Sa, Adriana
    Caramiaux, Baptiste
    Tanaka, Atau
    JOURNAL OF SCIENCE AND TECHNOLOGY OF THE ARTS, 2014, 6 (01) : 85 - 96
  • [33] Audio-Visual Feature Fusion for Speaker Identification
    Almaadeed, Noor
    Aggoun, Amar
    Amira, Abbes
    NEURAL INFORMATION PROCESSING, ICONIP 2012, PT I, 2012, 7663 : 56 - 67
  • [34] Audio-Visual Automatic Group Affect Analysis
    Sharma, Garima
    Dhall, Abhinav
    Cai, Jianfei
    IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, 2023, 14 (02) : 1056 - 1069
  • [35] The Audio-Visual Arabic Dataset for Natural Emotions
    Abu Shaqra, Ftoon
    Duwairi, Rehab
    Al-Ayyoub, Mahmoud
    2019 7TH INTERNATIONAL CONFERENCE ON FUTURE INTERNET OF THINGS AND CLOUD (FICLOUD 2019), 2019, : 324 - 329
  • [36] Audio-Visual Interactions in Product Sound Design
    Ozcan, Elif
    van Egmond, Rene
    HUMAN VISION AND ELECTRONIC IMAGING XV, 2010, 7527
  • [37] Effects of denotative congruency on audio-visual impressions
    Masakura, Yuko
    Ichikawa, Makoto
    JAPANESE PSYCHOLOGICAL RESEARCH, 2011, 53 (04) : 415 - 425
  • [38] Audio-visual word prominence detection from clean and noisy speech
    Heckmann, Martin
    COMPUTER SPEECH AND LANGUAGE, 2018, 48 : 15 - 30
  • [39] Cortical integration of audio-visual speech and non-speech stimuli
    Wyk, Brent C. Vander
    Ramsay, Gordon J.
    Hudac, Caitlin M.
    Jones, Warren
    Lin, David
    Klin, Ami
    Lee, Su Mei
    Pelphrey, Kevin A.
    BRAIN AND COGNITION, 2010, 74 (02) : 97 - 106
  • [40] Voice over: Audio-visual congruency and content recall in the gallery setting
    Fairhurst, Merle T.
    Scott, Minnie
    Deroy, Ophelia
    PLOS ONE, 2017, 12 (06):