Content Is King: Impact of Task Design for Eliciting Participant Agreement in Crowdsourcing for HRI

Times Cited: 3
Authors:
Bevins, Alisha [1]
McPhaul, Nina [2]
Duncan, Brittany A. [1]
Affiliations:
[1] Univ Nebraska, Lincoln, NE 68588 USA
[2] Howard Univ, Washington, DC 20059 USA
Source:
SOCIAL ROBOTICS, ICSR 2020 | 2020, Vol. 12483
Keywords:
Crowdsourced; Gesture; Aerial vehicle
DOI:
10.1007/978-3-030-62056-1_53
Chinese Library Classification (CLC):
TP18 [Artificial intelligence theory]
Discipline codes:
081104; 0812; 0835; 1405
Abstract:
This work investigates how the design of crowdsourced tasks can influence responses. As a formative line of inquiry, this study sought to understand how users would respond, whether through movement, response, or shift of focus, to varying flight paths from a drone. When designing an experiment, running several proto-studies can help generate an actionable dataset, but it has been unclear how differences in factors such as phrasing or pre- and post-surveys can impact the results. Leveraging methods from the psychology, computer-supported cooperative work, and human-robot interaction communities, this work explored best practices and lessons learned for crowdsourcing to reduce the time to actionable data for defining new communication paradigms. The lessons learned in this work will be broadly applicable within the human-robot interaction community, even beyond those interested in defining flight paths, because they provide a scaffold on which to build future experiments seeking to communicate using non-anthropomorphic robots. Important results and recommendations include: increased negative affect with increased question quantity; completion time being relatively consistent with the total number of responses rather than the number of videos; responses being more related to the video than to the question; and the necessity of varying question lengths to maintain engagement.
Pages: 640-651
Page count: 12