Time to compile: A performance installation as human-robot interaction study examining self-evaluation and perceived control

Cited by: 9
Authors
Cuan C. [1,2]
Berl E. [2]
LaViers A. [2]
Affiliations
[1] Department of Mechanical Science and Engineering, University of Illinois at Urbana-Champaign, Urbana, IL
[2] Department of Mechanical Engineering, Stanford University, Stanford, CA
Source
Paladyn | 2019 / Vol. 10 / Issue 01
Keywords
embodied learning; human robot interaction; robot performance; virtual reality;
DOI
10.1515/pjbr-2019-0024
Abstract
Embodied art installations embed interactive elements within theatrical contexts and allow participating audience members to experience art in an active, kinesthetic manner. These experiences can exemplify, probe, or question how humans think about objects, each other, and themselves. This paper presents work using installations to explore human perceptions of robot and human capabilities. The paper documents an installation, developed over several months and activated at distinct venues, where user studies were conducted in parallel to a robotic art installation. A set of best practices for successful collection of data over the course of these trials is developed. Results of the studies are presented, giving insight into human opinions of a variety of natural and artificial systems. In particular, after experiencing the art installation, participants were more likely to attribute action of distinct system elements to non-human entities. Post-treatment survey responses revealed a direct relationship between predicted difficulty and perceived success. Qualitative responses give insight into viewers' experiences watching human performers alongside technologies. This work lays a framework for measuring human perceptions of humanoid systems - and factors that influence the perception of whether a natural or artificial agent is controlling a given movement behavior - inside robotic art installations. © 2019 Catie Cuan et al., published by De Gruyter.
Pages: 267-285
Page count: 18