On the role of crossmodal prediction in audiovisual emotion perception

Cited: 36
Authors
Jessen, Sarah [1]
Kotz, Sonja A. [2,3]
Affiliations
[1] Max Planck Inst Human Cognit & Brain Sci, Res Grp Early Social Dev, D-04103 Leipzig, Germany
[2] Max Planck Inst Human Cognit & Brain Sci, Dept Neuropsychol, Res Grp Subcort Contribut Comprehens, D-04103 Leipzig, Germany
[3] Univ Manchester, Sch Psychol Sci, Manchester, Lancs, England
Source
FRONTIERS IN HUMAN NEUROSCIENCE | 2013, Vol. 7
Keywords
cross-modal prediction; emotion; multisensory; EEG; audiovisual; NEURONAL OSCILLATIONS; FACIAL EXPRESSIONS; BRAIN POTENTIALS; NEURAL PROCESSES; AUDITORY-CORTEX; VISUAL SPEECH; TIME-COURSE; INTEGRATION; BINDING; HUMANS;
DOI
10.3389/fnhum.2013.00369
CLC Classification
Q189 [Neuroscience]
Discipline Code
071006
Abstract
Humans rely on multiple sensory modalities to determine the emotional state of others. In fact, such multisensory perception may be one of the mechanisms explaining the ease and efficiency with which we recognize others' emotions. But how and when exactly do the different modalities interact? One aspect of multisensory perception that has received increasing interest in recent years is the concept of cross-modal prediction. In emotion perception, as in most other settings, visual information precedes auditory information; this visual lead can facilitate subsequent auditory processing. While this mechanism has often been described in audiovisual speech perception, it has so far not been addressed in audiovisual emotion perception. Based on the current state of the art in (a) cross-modal prediction and (b) multisensory emotion perception research, we propose that it is essential to consider the former in order to fully understand the latter. Focusing on electroencephalographic (EEG) and magnetoencephalographic (MEG) studies, we provide a brief overview of current research in both fields. In discussing these findings, we suggest that emotional visual information may allow more reliable prediction of auditory information than non-emotional visual information. In support of this hypothesis, we present a re-analysis of a previous data set showing an inverse correlation between the N1 EEG response and the duration of visual emotional, but not non-emotional, information. If the assumption that emotional content allows more reliable prediction can be corroborated in future studies, cross-modal prediction will prove a crucial factor in our understanding of multisensory emotion perception.
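The re-analysis described above relates N1 amplitudes to the duration of the visual emotional information preceding sound onset. As a rough illustration of how such an inverse correlation can be computed, here is a minimal Python sketch; it is not the authors' analysis pipeline, and the lead durations, amplitudes, and variable names are hypothetical placeholders.

```python
# Minimal sketch of the correlation described in the abstract: relating
# N1 amplitude to the duration of visual information preceding sound
# onset. Not the authors' pipeline; all values here are hypothetical.
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-stimulus measurements: duration of the visual lead
# before auditory onset (ms) and the mean N1 amplitude it evoked (µV).
visual_lead_ms = np.array([120.0, 180.0, 240.0, 300.0, 360.0, 420.0])
n1_amplitude_uv = np.array([-6.1, -5.4, -4.8, -4.1, -3.6, -3.0])

# The N1 is a negative-going component, so an attenuated N1 for longer
# visual leads shows up as a negative correlation between lead duration
# and the N1's absolute magnitude.
r, p = pearsonr(visual_lead_ms, np.abs(n1_amplitude_uv))
print(f"Pearson r = {r:.2f}, p = {p:.3f}")  # r < 0: longer lead, smaller N1
```

An analysis of this kind would typically be run per condition (emotional vs. non-emotional visual leads) so that the correlation can be compared between the two, which is the contrast the abstract reports.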
Pages: 7
Related Papers
50 records in total
  • [1] The role of emotion in dynamic audiovisual integration of faces and voices
    Kokinous, Jenny
    Kotz, Sonja A.
    Tavano, Alessandro
    Schroeger, Erich
    SOCIAL COGNITIVE AND AFFECTIVE NEUROSCIENCE, 2015, 10 (05): 713-720
  • [2] Crossmodal interactions in audiovisual emotion processing
    Mueller, Veronika I.
    Cieslik, Edna C.
    Turetsky, Bruce I.
    Eickhoff, Simon B.
    NEUROIMAGE, 2012, 60 (01): 553-561
  • [3] Crossmodal and incremental perception of audiovisual cues to emotional speech
    Barkhuysen, Pashiera
    Krahmer, Emiel
    Swerts, Marc
    LANGUAGE AND SPEECH, 2010, 53: 3-30
  • [4] Prediction and constraint in audiovisual speech perception
    Peelle, Jonathan E.
    Sommers, Mitchell S.
    CORTEX, 2015, 68: 169-181
  • [5] Crossmodal Integration Enhances Neural Representation of Task-Relevant Features in Audiovisual Face Perception
    Li, Yuanqing
    Long, Jinyi
    Huang, Biao
    Yu, Tianyou
    Wu, Wei
    Liu, Yongjian
    Liang, Changhong
    Sun, Pei
    CEREBRAL CORTEX, 2015, 25 (02): 384-395
  • [6] Positive Emotion Facilitates Audiovisual Binding
    Kitamura, Miho S.
    Watanabe, Katsumi
    Kitagawa, Norimichi
    FRONTIERS IN INTEGRATIVE NEUROSCIENCE, 2016, 9
  • [7] Audiovisual emotion perception develops differently from audiovisual phoneme perception during childhood
    Yamamoto, Hisako W.
    Kawahara, Misako
    Tanaka, Akihiro
    PLOS ONE, 2020, 15 (06)
  • [8] Effects of an Audiovisual Emotion Perception Training for Schizophrenia: A Preliminary Study
    Jeong, Ji Woon
    Kim, Hyun Taek
    Lee, Seung-Hwan
    Lee, Hyejeen
    FRONTIERS IN PSYCHIATRY, 2021, 12
  • [9] Crossmodal transfer of emotion by music
    Logeswaran, Nidhya
    Bhattacharya, Joydeep
    NEUROSCIENCE LETTERS, 2009, 455 (02): 129-133
  • [10] Classifying Schizotypy Using an Audiovisual Emotion Perception Test and Scalp Electroencephalography
    Jeong, Ji Woon
    Wendimagegn, Tariku W.
    Chang, Eunhee
    Chun, Yeseul
    Park, Joon Hyuk
    Kim, Hyoung Joong
    Kim, Hyun Taek
    FRONTIERS IN HUMAN NEUROSCIENCE, 2017, 11