A value-driven McGurk effect: Value-associated faces enhance the influence of visual information on audiovisual speech perception and its eye movement pattern

Cited by: 0
Authors
Xiaoxiao Luo
Guanlan Kang
Yu Guo
Xingcheng Yu
Xiaolin Zhou
Affiliations
[1] Peking University, School of Psychological and Cognitive Sciences
[2] Zhejiang Normal University, Institute of Psychological and Brain Sciences
[3] Peking University, Beijing Key Laboratory of Behavior and Mental Health
[4] Shanghai International Studies University, Institute of Linguistics
[5] Peking University, PKU
Source
Attention, Perception, & Psychophysics | 2020 / Vol. 82
Keywords
McGurk effect; Reward association; Audiovisual speech perception; Eye movements; Signal detection analysis
DOI: Not available
Abstract
This study investigates whether and how value-associated faces affect audiovisual speech perception and its eye movement pattern. In a training phase, participants learned to associate particular faces with or without monetary reward; in a subsequent test phase, they identified syllables spoken by talkers in video clips whose faces had or had not been associated with reward. The syllables were either congruent or incongruent with the talkers' mouth movements; crucially, some incongruent syllables could elicit the McGurk effect. Results showed that the McGurk effect occurred more often for reward-associated faces than for non-reward-associated faces. Moreover, a signal detection analysis revealed that participants had a lower criterion and higher discriminability for reward-associated faces than for non-reward-associated faces. Surprisingly, eye movement data showed that participants spent more time looking at, and fixated more often on, the extraoral (nose/cheek) area for reward-associated faces than for non-reward-associated faces, whereas the opposite pattern was observed for the oral (mouth) area. A correlation analysis across participants demonstrated that the more they looked at the extraoral area in the training phase because of reward, the larger the increase in the proportion of McGurk responses (and the less they looked at the oral area) in the test phase. These findings not only demonstrate that value-associated faces enhance the influence of visual information on audiovisual speech perception but also highlight the importance of the extraoral facial area in the value-driven McGurk effect.
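The abstract reports a lower criterion and higher discriminability for reward-associated faces, which are the two standard signal detection theory measures. As a minimal illustrative sketch (not the authors' analysis code; the hit and false-alarm rates below are hypothetical), d′ and criterion c can be computed from the z-transformed hit and false-alarm rates:

```python
from statistics import NormalDist

def sdt_measures(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    """Return (d_prime, criterion) from hit and false-alarm rates.

    d' = z(H) - z(FA)  measures discriminability;
    c  = -(z(H) + z(FA)) / 2  measures response bias (criterion).
    Rates must lie strictly between 0 and 1 (apply a correction
    such as the log-linear rule to rates of exactly 0 or 1 first).
    """
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -(z(hit_rate) + z(fa_rate)) / 2
    return d_prime, criterion

# Hypothetical example: 85% hits, 20% false alarms
dp, c = sdt_measures(0.85, 0.20)
```

On this reading, the reported pattern (lower c, higher d′ for reward-associated faces) means participants were both more willing to report, and better able to discriminate, the relevant visual signal.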
Pages: 1928–1941