Online Platforms for Remote Immersive Virtual Reality Testing: An Emerging Tool for Experimental Behavioral Research

Cited: 4
Authors
Loetscher, Tobias [1 ]
Jurkovic, Nadia Siena [1 ]
Michalski, Stefan Carlo [1 ,2 ]
Billinghurst, Mark [3 ,4 ]
Lee, Gun [3 ]
Affiliations
[1] Univ South Australia, Cognit Ageing & Impairment Neurosci Lab, Justice & Soc, Adelaide, SA 5000, Australia
[2] Univ Sydney, Sch Psychol, Sydney, NSW 2006, Australia
[3] Univ South Australia, Australian Res Ctr Interact & Virtual Environm, STEM, Adelaide, SA 5000, Australia
[4] Univ Auckland, Empath Comp Lab, Auckland 1010, New Zealand
Keywords
crowdsourcing; virtual reality; online testing; reaction time; Prolific; INHIBITION; RETURN;
DOI
10.3390/mti7030032
Chinese Library Classification
TP18 [Artificial intelligence theory];
Discipline classification codes
081104; 0812; 0835; 1405;
Abstract
Virtual Reality (VR) technology is gaining popularity as a research tool for studying human behavior, but its use for remote testing is still an emerging field. This study evaluated the feasibility of conducting remote VR behavioral experiments that require millisecond timing. Participants were recruited via an online crowdsourcing platform and accessed a task probing the classic cognitive phenomenon "Inhibition of Return" through a web browser, using either their own VR headset or a desktop computer (68 participants per group). The results confirm previous findings that remote participants using desktop computers can complete time-critical cognitive experiments effectively. However, inhibition of return was only partially replicated in the VR headset group. Exploratory analyses revealed that technical factors, such as headset type, likely contribute substantially to response-time variability and must be mitigated to obtain accurate results. This study demonstrates the potential of remote VR testing to broaden research scope and reach a larger participant population. Crowdsourcing services appear to be an efficient and effective way to recruit participants for remote behavioral testing with high-end VR headsets.
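The Inhibition-of-Return paradigm mentioned in the abstract compares reaction times to targets appearing at a previously cued location versus an uncued location; at longer cue-target intervals, cued-location responses are slower, and that cued-minus-uncued difference is the IOR effect. A minimal sketch of this comparison is shown below; the reaction-time values are purely illustrative, not data from the study.

```python
# Sketch of the standard IOR effect computation: positive cued-minus-uncued
# reaction-time difference indicates inhibition of return.
from statistics import mean

# Per-trial reaction times in milliseconds (hypothetical example values)
cued_rts = [412, 398, 430, 405, 421]    # target at the previously cued location
uncued_rts = [381, 375, 402, 368, 390]  # target at the uncued location

ior_effect_ms = mean(cued_rts) - mean(uncued_rts)
print(f"IOR effect: {ior_effect_ms:.1f} ms")  # positive -> IOR present
```

In the study itself, this kind of difference is what must survive the timing noise of remote browsers and consumer VR headsets, which is why headset-related variability matters.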
Pages: 10