No "Self" Advantage for Audiovisual Speech Aftereffects

Cited: 0
Authors
Modelska, Maria [1 ]
Pourquie, Marie [1 ,2 ]
Baart, Martijn [1 ,3 ]
Affiliations
[1] BCBL Basque Ctr Cognit Brain & Language, Donostia San Sebastian, Spain
[2] UPPA, IKER UMR5478, Bayonne, France
[3] Tilburg Univ, Dept Cognit Neuropsychol, Tilburg, Netherlands
Keywords
speech perception; self-advantage; recalibration; adaptation; lip-reading; SELECTIVE ADAPTATION; VISUAL SPEECH; ELECTROPHYSIOLOGICAL EVIDENCE; PHONETIC RECALIBRATION; AUDITORY SPEECH; HEARING-LIPS; PERCEPTION; IDENTIFICATION; INFORMATION; LISTENERS;
DOI
10.3389/fpsyg.2019.00658
CLC classification
B84 [Psychology]
Subject classification
04; 0402
Abstract
Although the default state of the world is that we see and hear other people talking, there is evidence that seeing and hearing ourselves rather than someone else may lead to visual (i.e., lip-read) or auditory "self" advantages. We assessed whether there is a "self" advantage for phonetic recalibration (a lip-read driven cross-modal learning effect) and selective adaptation (a contrastive effect in the opposite direction of recalibration). We observed both aftereffects as well as an on-line effect of lip-read information on auditory perception (i.e., immediate capture), but there was no evidence for a "self" advantage in any of the tasks (as additionally supported by Bayesian statistics). These findings strengthen the emerging notion that recalibration reflects a general learning mechanism, and bolster the argument that adaptation depends on rather low-level auditory/acoustic features of the speech signal.
Pages: 10