The multimodal nature of spoken word processing in the visual world: Testing the predictions of alternative models of multimodal integration
Cited: 16
Authors:
Smith, Alastair C. [1]
Monaghan, Padraic [2]
Huettig, Falk [1,3]
Affiliations:
[1] Max Planck Inst Psycholinguist, POB 310, NL-6500 AH Nijmegen, Netherlands
[2] Univ Lancaster, Dept Psychol, Lancaster, England
[3] Radboud Univ Nijmegen, Donders Inst Brain Cognit & Behav, Nijmegen, Netherlands
Keywords:
Visual world paradigm;
Visual attention;
Spoken word recognition;
Connectionist modelling;
Multimodal processing;
EYE-MOVEMENTS;
INTERACTIVE PROCESSES;
SPEECH-PERCEPTION;
TRACE MODEL;
TIME-COURSE;
LANGUAGE;
ACTIVATION;
RECOGNITION;
INFORMATION;
AMBIGUITY;
DOI: 10.1016/j.jml.2016.08.005
Chinese Library Classification: H0 [Linguistics]
Subject Classification Codes: 030303; 0501; 050102
Abstract:
Ambiguity in natural language is ubiquitous, yet spoken communication is effective because information carried in the speech signal is integrated with information available in the surrounding multimodal landscape. Language-mediated visual attention requires the integration of visual and linguistic information and has thus been used to examine properties of the architecture supporting multimodal processing during spoken language comprehension. In this paper we test predictions generated by alternative models of this multimodal system. A model (TRACE) in which multimodal information is combined at the point of the lexical representations of words predicted a stronger effect of phonological rhyme relative to semantic and visual information on gaze behaviour, whereas a model in which sub-lexical information can interact across modalities (MIM) predicted a greater influence of visual and semantic information compared to phonological rhyme. Two visual world experiments designed to test these predictions offer support for sub-lexical multimodal interaction during online language processing. (C) 2016 Elsevier Inc. All rights reserved.
Pages: 276-303
Page count: 28