Audiovisual emotion perception develops differently from audiovisual phoneme perception during childhood

Cited by: 2
Authors
Yamamoto, Hisako W. [1 ,2 ]
Kawahara, Misako [1 ,2 ]
Tanaka, Akihiro [1 ]
Affiliations
[1] Tokyo Woman's Christian Univ, Suginami Ku, Tokyo, Japan
[2] Japan Soc Promot Sci, Chiyoda Ku, Tokyo, Japan
Source
PLOS ONE | 2020, Vol. 15, Issue 6
Keywords
VISUAL SPEECH; RECOGNITION; VOICE; FACE
DOI
10.1371/journal.pone.0234553
Chinese Library Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences]
Subject Classification Codes
07; 0710; 09
Abstract
This study investigated the developmental paths in the use of audiovisual information for the perception of emotions and phonemes by Japanese speakers. Children aged 5 to 12 years and adults aged 30 to 39 years completed an emotion perception task, in which speakers expressed their emotions through their faces and voices, and a phoneme perception task, which used phonemic information from speakers' lip movements and speech sounds. Results indicated that Japanese children's reliance on auditory information when judging emotions increased with age, whereas their use of audiovisual information for judging phonemes remained constant across ages. Moreover, adults were affected by visual information more than children were. We discuss whether these differences in developmental patterns are due to differential integration processes for information indicative of emotions and phonemes, as well as possible cultural/linguistic reasons for these differences.
Pages: 12