No "Self" Advantage for Audiovisual Speech Aftereffects

Cited by: 0
Authors
Modelska, Maria [1 ]
Pourquie, Marie [1 ,2 ]
Baart, Martijn [1 ,3 ]
Affiliations
[1] BCBL Basque Ctr Cognit Brain & Language, Donostia San Sebastian, Spain
[2] UPPA, IKER UMR5478, Bayonne, France
[3] Tilburg Univ, Dept Cognit Neuropsychol, Tilburg, Netherlands
Source
FRONTIERS IN PSYCHOLOGY | 2019, Vol. 10
Keywords
speech perception; self-advantage; recalibration; adaptation; lip-reading; SELECTIVE ADAPTATION; VISUAL SPEECH; ELECTROPHYSIOLOGICAL EVIDENCE; PHONETIC RECALIBRATION; AUDITORY SPEECH; HEARING-LIPS; PERCEPTION; IDENTIFICATION; INFORMATION; LISTENERS;
DOI
10.3389/fpsyg.2019.00658
Chinese Library Classification
B84 [Psychology]
Subject Classification Code
04; 0402
Abstract
Although the default state of the world is that we see and hear other people talking, there is evidence that seeing and hearing ourselves rather than someone else may lead to visual (i.e., lip-read) or auditory "self" advantages. We assessed whether there is a "self" advantage for phonetic recalibration (a lip-read driven cross-modal learning effect) and selective adaptation (a contrastive effect in the opposite direction of recalibration). We observed both aftereffects as well as an on-line effect of lip-read information on auditory perception (i.e., immediate capture), but there was no evidence for a "self" advantage in any of the tasks (as additionally supported by Bayesian statistics). These findings strengthen the emerging notion that recalibration reflects a general learning mechanism, and bolster the argument that adaptation depends on rather low-level auditory/acoustic features of the speech signal.
Pages: 10
Related Papers
50 records in total
[21]   Audiovisual speech synthesis: An overview of the state-of-the-art [J].
Mattheyses, Wesley ;
Verhelst, Werner .
SPEECH COMMUNICATION, 2015, 66 :182-217
[22]   Hierarchically nested networks optimize the analysis of audiovisual speech [J].
Chalas, Nikos ;
Omigie, Diana ;
Poeppel, David ;
van Wassenhove, Virginie .
ISCIENCE, 2023, 26 (03)
[23]   Audiovisual speech asynchrony asymmetrically modulates neural binding [J].
Sato, Marc .
NEUROPSYCHOLOGIA, 2024, 198
[24]   Speech-specific audiovisual integration modulates induced theta-band oscillations [J].
Lindborg, Alma ;
Baart, Martijn ;
Stekelenburg, Jeroen J. ;
Vroomen, Jean ;
Andersen, Tobias S. .
PLOS ONE, 2019, 14 (07)
[25]   Audiovisual integration of speech in a patient with Broca's Aphasia [J].
Andersen, Tobias S. ;
Starrfelt, Randi .
FRONTIERS IN PSYCHOLOGY, 2015, 6
[26]   Visemic processing in audiovisual discrimination of natural speech: A simultaneous fMRI-EEG study [J].
Dubois, Cyril ;
Otzenberger, Helene ;
Gounot, Daniel ;
Sock, Rudolph ;
Metz-Lutz, Marie-Noelle .
NEUROPSYCHOLOGIA, 2012, 50 (07) :1316-1326
[27]   INVOLVEMENT OF SUPERIOR TEMPORAL AREAS IN AUDIOVISUAL AND AUDIOMOTOR SPEECH INTEGRATION [J].
Komeilipoor, N. ;
Cesari, P. ;
Daffertshofer, A. .
NEUROSCIENCE, 2017, 343 :276-283
[28]   Effects of age and left hemisphere lesions on audiovisual integration of speech [J].
Michaelis, Kelly ;
Erickson, Laura C. ;
Fama, Mackenzie E. ;
Skipper-Kallal, Laura M. ;
Xing, Shihui ;
Lacey, Elizabeth H. ;
Anbari, Zainab ;
Norato, Gina ;
Rauschecker, Josef P. ;
Turkeltaub, Peter E. .
BRAIN AND LANGUAGE, 2020, 206
[29]   Causal inference and temporal predictions in audiovisual perception of speech and music [J].
Noppeney, Uta ;
Lee, Hwee Ling .
ANNALS OF THE NEW YORK ACADEMY OF SCIENCES, 2018, 1423 (01) :102-116
[30]   Temporal synchrony and audiovisual integration of speech and object stimuli in autism [J].
Smith, Elizabeth ;
Zhang, Shouling ;
Bennetto, Loisa .
RESEARCH IN AUTISM SPECTRUM DISORDERS, 2017, 39 :11-19