Processing of incongruent emotional expressions in voice and semantics: The dominant modality and integration with facial expressions

Cited by: 0
Authors
Kikutani, Mariko [1 ]
Ikemoto, Machiko [2 ]
Affiliations
[1] Kanazawa Univ, Inst Liberal Arts & Sci, Kakuma machi, Kanazawa, Ishikawa 9201192, Japan
[2] Doshisha Univ, Fac Psychol, Kyoto, Japan
Keywords
Emotion; voice; semantic content; facial expressions; incongruent speech; SPONTANEOUS ATTENTION; PROSODY; COMMUNICATION; PERCEPTION; SPEECH; WORD; FACE; TONE
DOI
10.1177/17470218251330422
Chinese Library Classification (CLC)
B84 [Psychology]
Discipline codes
04; 0402
Abstract
This research concerns three channels of emotional communication: voice, semantics, and facial expressions. We used speech in which the emotions conveyed by the voice and by the semantic content did not match, and investigated which modality dominates and how the two interact with facial expressions. The stimuli were voices expressing anger, happiness, or sadness while saying, "I'm angry," "I'm pleased," or "I'm sad." Each voice was accompanied by a facial image that expressed either the same emotion as the voice (voice = face condition), the same emotion as the semantics (semantic = face condition), or a blend of the emotions shown in the voice and semantics (morph condition). The phrases were spoken in the participants' native language (Japanese), their second language (English), and an unfamiliar language (Khmer). In Study 1, participants rated how much they agreed that the speaker expressed anger, happiness, and sadness; their attention was not directed to either channel. In Study 2, participants were instructed to attend to either the voice or the semantics. The morph condition in Study 1 showed semantic dominance for the native-language stimuli. The semantic = face and voice = face conditions in Studies 1 and 2 revealed that an emotion expressed solely in the semantics (while a different emotion was shown in the face and voice) had a greater impact on judgements of the speaker's emotion than an emotion expressed solely in the voice, as long as the semantic content was in a language the participants understood.
Pages: 16