Children Flexibly Seek Visual Information to Support Signed and Spoken Language Comprehension

Cited: 11
Authors
MacDonald, Kyle [1 ,2 ]
Marchman, Virginia A. [1 ]
Fernald, Anne [1 ]
Frank, Michael C. [1 ]
Affiliations
[1] Stanford Univ, Dept Psychol, Stanford, CA 94305 USA
[2] Univ Calif Los Angeles, Dept Commun, 2225 Rolfe Hall, Los Angeles, CA 90095 USA
Keywords
eye movements; grounded language comprehension; information-seeking; speech in background noise; American Sign Language
DOI: 10.1037/xge0000702
Chinese Library Classification
B84 [Psychology]
Discipline Classification Code
04; 0402
Abstract
During grounded language comprehension, listeners must link the incoming linguistic signal to the visual world despite uncertainty in the input. Information gathered through visual fixations can facilitate understanding. But do listeners flexibly seek supportive visual information? Here, we propose that even young children can adapt their gaze and actively gather information for the goal of language comprehension. We present 2 studies of eye movements during real-time language processing, where the value of fixating on a social partner varies across different contexts. First, compared with children learning spoken English (n = 80), young American Sign Language (ASL) learners (n = 30) delayed gaze shifts away from a language source and produced a higher proportion of language-consistent eye movements. This result provides evidence that ASL learners adapt their gaze to effectively divide attention between language and referents, which both compete for processing via the visual channel. Second, English-speaking preschoolers (n = 39) and adults (n = 31) fixated longer on a speaker's face while processing language in a noisy auditory environment. Critically, as with the ASL learners in Experiment 1, this longer fixation allowed listeners to gather more visual information and produced a higher proportion of language-consistent gaze shifts. Taken together, these studies suggest that young listeners can adapt their gaze to seek visual information from social partners to support real-time language comprehension.
Pages: 1078-1096
Number of pages: 19