Teaching Language to Deaf Infants with a Robot and a Virtual Human

Times Cited: 26
Authors
Scassellati, Brian [1]
Brawer, Jake [1]
Tsui, Katherine [1]
Gilani, Setareh Nasihati [2]
Malzkuhn, Melissa [3]
Manini, Barbara [3]
Stone, Adam [1]
Kartheiser, Geo [3]
Merla, Arcangelo [4]
Shapiro, Ari [2]
Traum, David [2]
Petitto, Laura-Ann [3]
Affiliations
[1] Yale Univ, Dept Comp Sci, POB 2158, New Haven, CT 06520 USA
[2] Univ Southern Calif, Inst Creat Technol, Los Angeles, CA 90007 USA
[3] Gallaudet Univ, PhD Educ Neurosci Program, Washington, DC 20002 USA
[4] Univ G. d'Annunzio, Dept Neurosci & Imaging Sci, Chieti, Italy
Source
PROCEEDINGS OF THE 2018 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS (CHI 2018) | 2018
Keywords
Social robots; virtual humans; sign language; assistive technology; language development; SOCIALLY ASSISTIVE ROBOTICS; EYE-GAZE; PERCEPTION; CHILDREN; LEARN; TELEVISION; BENEFITS; AGENTS; MEDIA
DOI
10.1145/3173574.3174127
Chinese Library Classification
TP3 [Computing Technology; Computer Technology]
Discipline Code
0812
Abstract
Children with insufficient exposure to language during critical developmental periods in infancy are at risk for cognitive, language, and social deficits [55]. Deaf infants are especially vulnerable, as more than 90% are born to hearing parents with little sign language experience [48]. We created an integrated multi-agent system, comprising a robot and a virtual human, designed to augment language exposure for 6-12-month-old infants. Human-machine design for infants is challenging, as most screen-based media are unlikely to support learning at this age [33]. Robots do not yet have the dexterity and expressiveness required for signing, and even if they did, developmental questions would remain about whether language from artificial agents can engage infants. Here we engineered the robot and avatar to provide visual language and to effect socially contingent conversational exchange. We demonstrate the successful engagement of our technology through case studies of deaf and hearing infants.
Pages: 13
Related Papers
79 records in total
[21] Graesser, A. C., Chipman, P., Haynes, B. C., & Olney, A. AutoTutor: An intelligent tutoring system with mixed-initiative dialogue. IEEE Transactions on Education, 2005, 48(4): 612-618.
[22] Greczek, J. AAAI Fall Symposium Technical Report, 2015, Vol. FS-15-01: 74.
[23] Grossmann, T., Lloyd-Fox, S., & Johnson, M. H. Brain responses reveal young infants' sensitivity to when a social partner follows their gaze. Developmental Cognitive Neuroscience, 2013, 6: 155-161.
[24] Ioannou, S., Ebisch, S., Aureli, T., Bafunno, D., Ioannides, H. A., Cardone, D., Manini, B., Romani, G. L., Gallese, V., & Merla, A. The autonomic signature of guilt in children: A thermal infrared imaging study. PLOS ONE, 2013, 8(11).
[25] Jaballah, K., & Jemni, M. A review on 3D signing avatars: Benefits, uses and challenges. International Journal of Multimedia Data Engineering & Management, 2013, 4(1): 21-45.
[26] Johnson, W. L. Lecture Notes in Computer Science, 2004, 3220: 336.
[27] Kacorri, H. In: Universal Access in Human-Computer Interaction: Design Methods, Tools, and Interaction Techniques for eInclusion (UAHCI 2013, held as part of HCI International 2013), LNCS 8009, 2013: 510. DOI 10.1007/978-3-642-39188-0_55.
[28] Kehoe, C. In: ICLS 2004: International Conference of the Learning Sciences, Proceedings, 2004: 613.
[29] Kim, E. S., Paul, R., Shic, F., & Scassellati, B. Bridging the research gap: Making HRI useful to individuals with autism. Journal of Human-Robot Interaction, 2012, 1(1): 26-54.
[30] Kipp, M. In: Intelligent Virtual Agents: Proceedings of the 11th International Conference (IVA 2011), 2011: 113. DOI 10.1007/978-3-642-23974-8_13.