Detection and Attention for Auditory, Visual, and Audiovisual Speech in Children with Hearing Loss

Cited by: 6
Authors
Jerger, Susan [1 ,2 ]
Damian, Markus F. [3 ]
Karl, Cassandra [1 ,2 ]
Abdi, Herve [1 ]
Affiliations
[1] Univ Texas Dallas, Sch Behav Brain Sci, GR4-1,800 W Campbell Rd, Richardson, TX 75080 USA
[2] Univ Texas Dallas, Callier Ctr Commun Disorders, Richardson, TX 75083 USA
[3] Univ Bristol, Sch Psychol Sci, Bristol, Avon, England
Keywords
Attention; Audiovisual speech; Children; Hearing loss; Lipreading; Multisensory speech; Speech detection; Visual speech; Response-time distributions; Multisensory integration; Sustained attention; Stimulus intensity; Moving faces; Perception; Discrimination; Identification; Language; Performance
DOI
10.1097/AUD.0000000000000798
CLC Numbers
R36 [Pathology]; R76 [Otorhinolaryngology]
Subject Classification Codes
100104; 100213
Abstract
Objectives: Efficient multisensory speech detection is critical for children, who must quickly detect and encode a rapid stream of speech to participate in conversations and to access the audiovisual cues that underpin speech and language development. Yet multisensory speech detection remains understudied in children with hearing loss (CHL). This research assessed detection, along with vigilant/goal-directed attention, for multisensory versus unisensory speech in CHL versus children with normal hearing (CNH).

Design: Participants were 60 CHL who used hearing aids and communicated successfully aurally/orally and 60 age-matched CNH. Simple response times determined how quickly children could detect a preidentified, easy-to-hear stimulus (the utterance "buh" at 70 dB SPL, presented in auditory-only [A], visual-only [V], or audiovisual [AV] mode). The V mode comprised two facial conditions: static versus dynamic face. Faster detection for multisensory (AV) than unisensory (A or V) input indicates multisensory facilitation. We assessed mean responses as well as faster versus slower responses (defined by the first versus third quartiles of the response-time distributions), conceptualized as follows: faster responses (first quartile) reflect efficient detection coupled with efficient vigilant/goal-directed attention, whereas slower responses (third quartile) reflect less efficient detection associated with attentional lapses. Finally, we studied associations between these results and the personal characteristics of the CHL.

Results: Unisensory A versus V modes: Both groups showed better detection and attention for A than V input. The A input more readily captured children's attention and minimized attentional lapses, which supports A-bound processing even by CHL, who were processing low-fidelity A input. CNH and CHL did not differ in their ability to detect A input at a conversational speech level.

Multisensory AV versus A modes: Both groups showed better detection and attention for AV than A input. The advantage for AV input was a facial effect (it occurred for both static and dynamic faces), a pattern suggesting that communication is a social interaction that is more than just words. Attention did not differ between groups; detection was faster in CHL than CNH for AV input, but not for A input.

Associations between personal characteristics/degree of hearing loss of CHL and results: CHL with the greatest deficits in detection of V input had the poorest word-recognition skills, and CHL with the greatest reduction of attentional lapses from AV input had the poorest vocabulary skills. Both outcomes are consistent with the idea that CHL who are processing low-fidelity A input depend disproportionately on V and AV input to learn to identify words and to associate them with concepts. As CHL aged, attention to V input improved. Degree of hearing loss did not influence the results.

Conclusions: Understanding speech, a daily challenge for CHL, is a complex task that demands efficient detection of and attention to AV speech cues. Our results support the clinical importance of multisensory approaches to understanding and advancing spoken communication by CHL.
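The quartile-based analysis described in the Design section reduces to simple summaries of each per-modality response-time (RT) distribution. Below is a minimal Python sketch of that idea, assuming hypothetical per-trial RTs in milliseconds; the trial data, variable names, and the rt_summary helper are illustrative assumptions, not the authors' analysis code. It computes the mean, the first quartile (faster responses, efficient attention), the third quartile (slower responses, attentional lapses), and the AV-over-unisensory facilitation comparison.

import numpy as np

def rt_summary(rts):
    """Mean, Q1 (faster responses), and Q3 (slower responses) of an RT distribution (ms)."""
    rts = np.asarray(rts, dtype=float)
    return {
        "mean": rts.mean(),
        "q1": np.percentile(rts, 25),  # faster responses: efficient detection/attention
        "q3": np.percentile(rts, 75),  # slower responses: attentional lapses
    }

# Hypothetical simple-RT trials (ms) for one child; real data would come from the task.
trials = {
    "A":  [412, 388, 455, 430, 401, 476, 395, 440],
    "V":  [505, 540, 498, 560, 515, 587, 530, 552],
    "AV": [370, 355, 402, 390, 348, 415, 365, 380],
}

summaries = {mode: rt_summary(r) for mode, r in trials.items()}
for mode, s in summaries.items():
    print(f"{mode:>2}: mean={s['mean']:.0f} ms, Q1={s['q1']:.0f}, Q3={s['q3']:.0f}")

# Multisensory facilitation: faster detection for AV than for the best unisensory mode.
best_unisensory = min(summaries["A"]["mean"], summaries["V"]["mean"])
facilitation = best_unisensory - summaries["AV"]["mean"]
print(f"AV facilitation vs. best unisensory mean: {facilitation:.0f} ms")

A positive facilitation value in this sketch corresponds to the faster AV detection reported for both groups; comparing Q3 values across modes mirrors the article's attentional-lapse analysis.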
Pages: 508-520 (13 pages)