Face-to-Face Interaction with Pedagogical Agents, Twenty Years Later

Cited by: 123
Authors
Johnson W.L. [1 ]
Lester J.C. [2 ]
Affiliations
[1] Alelo Inc., Los Angeles, CA
[2] North Carolina State University, Raleigh, NC
Funding
U.S. National Science Foundation
Keywords
Game-based learning; Pedagogical agents; Robotics; Teachable agents; Virtual coaches; Virtual environments; Virtual tutors
DOI
10.1007/s40593-015-0065-9
Abstract
Johnson et al. (International Journal of Artificial Intelligence in Education, 11, 47-78, 2000) introduced and surveyed a new paradigm for interactive learning environments: animated pedagogical agents. The article argued for combining animated interface agent technologies with intelligent learning environments, yielding intelligent systems that can interact with learners in natural, human-like ways to achieve better learning outcomes. We outlined a variety of possible uses for pedagogical agents. But we offered only preliminary evidence that they improve learning, leaving that to future research and development. Twenty years have elapsed since work began on animated pedagogical agents. This article re-examines the concepts and predictions in the 2000 article in the context of the current state of the field. Some of the ideas in the paper have become well established and widely adopted, especially in game-based learning environments. Others are only now being realized, thanks to advances in immersive interfaces and robotics that enable rich face-to-face interaction between learners and agents. Research has confirmed that pedagogical agents can be beneficial, but not equally for all learning problems, applications, and learner populations. Although there is a growing body of research findings about pedagogical agents, many questions remain and much work remains to be done. © 2015 International Artificial Intelligence in Education Society.
Pages: 25-36 (11 pages)