Teaching Language to Deaf Infants with a Robot and a Virtual Human

Cited by: 25
Authors
Scassellati, Brian [1 ]
Brayer, Jake [1 ]
Tsui, Katherine [1 ]
Gilani, Setareh Nasihati [2 ]
Malzkuhn, Melissa [3 ]
Manini, Barbara [3 ]
Stone, Adam [1 ]
Kartheiser, Geo [3 ]
Merla, Arcangelo [4 ]
Shapiro, Ari [2 ]
Traum, David [2 ]
Petitto, Laura-Ann [3 ]
Affiliations
[1] Yale Univ, Dept Comp Sci, POB 2158, New Haven, CT 06520 USA
[2] Univ Southern Calif, Inst Creat Technol, Los Angeles, CA 90007 USA
[3] Gallaudet Univ, PhD Educ Neurosci Program, Washington, DC 20002 USA
[4] Univ G. D'Annunzio, Dept Neurosci & Imaging Sci, Chieti, Italy
Source
PROCEEDINGS OF THE 2018 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS (CHI 2018) | 2018
Keywords
Social robots; virtual humans; sign language; assistive technology; language development; SOCIALLY ASSISTIVE ROBOTICS; EYE-GAZE; PERCEPTION; CHILDREN; LEARN; TELEVISION; BENEFITS; AGENTS; MEDIA;
DOI
10.1145/3173574.3174127
Chinese Library Classification
TP3 [Computing technology; computer technology];
Discipline Classification Code
0812 ;
Abstract
Children with insufficient exposure to language during critical developmental periods in infancy are at risk for cognitive, language, and social deficits [55]. This is especially difficult for deaf infants, as more than 90% are born to hearing parents with little sign language experience [48]. We created an integrated multi-agent system involving a robot and a virtual human designed to augment language exposure for 6-12 month old infants. Human-machine design for infants is challenging, as most screen-based media are unlikely to support learning in infants [33]. Robots are presently incapable of the dexterity and expressiveness required for signing, and even if such capability existed, developmental questions would remain about whether language from artificial agents can engage infants. Here we engineered the robot and avatar to provide visual language so as to effect socially contingent human conversational exchange. We demonstrate the successful engagement of our technology through case studies of deaf and hearing infants.
Pages: 13
References
79 entries in total
[1]  
Admoni H, 2017, J HUM-ROBOT INTERACT, V6, P25, DOI 10.5898/JHRI.6.1.Admoni
[2]  
Akalin N, 2014, IEEE-RAS INT C HUMAN, P1122, DOI 10.1109/HUMANOIDS.2014.7041509
[3]  
Amos B., 2016, OpenFace: A General-Purpose Face Recognition Library with Mobile Applications
[4]   Physiological, clinical and psychological applications of dynamic infrared imaging [J].
Anbar, M.
PROCEEDINGS OF THE 25TH ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY, VOLS 1-4: A NEW BEGINNING FOR HUMAN HEALTH, 2003, 25 :1121-1124
[5]   Television and very young children [J].
Anderson, DR ;
Pempek, TA .
AMERICAN BEHAVIORAL SCIENTIST, 2005, 48 (05) :505-522
[6]  
[Anonymous], 2006, ROMAN 2006 15 IEEE I, DOI 10.1109/ROMAN.2006.314404
[7]  
[Anonymous], 2015, National Institute on Deafness and Other Communication Disorders
[8]  
[Anonymous], Proceedings of the Cognitive Science Society
[9]   Can we talk to robots? Ten-month-old infants expected interactive humanoid robots to be talked to by persons [J].
Arita, A ;
Hiraki, K ;
Kanda, T ;
Ishiguro, H .
COGNITION, 2005, 95 (03) :B49-B57
[10]   The Benefits of Interactions with Physically Present Robots over Video-Displayed Agents [J].
Bainbridge, Wilma A. ;
Hart, Justin W. ;
Kim, Elizabeth S. ;
Scassellati, Brian .
INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS, 2011, 3 (01) :41-52