Compensating for Distance Compression in Audiovisual Virtual Environments Using Incongruence

Cited by: 33
Authors
Finnegan, Daniel J. [1 ]
O'Neill, Eamonn [2 ]
Proulx, Michael J. [3 ]
Affiliations
[1] Univ Bath, Ctr Digital Entertainment, Bath BA2 7AY, Avon, England
[2] Univ Bath, Dept Comp Sci, Bath BA2 7AY, Avon, England
[3] Univ Bath, Dept Psychol, Bath BA2 7AY, Avon, England
Source
34TH ANNUAL CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, CHI 2016 | 2016
Funding
UK Engineering and Physical Sciences Research Council (EPSRC)
Keywords
Distance perception; spatial audio; head mounted display; virtual environment; binaural audio; incongruent display; MULTISENSORY INTEGRATION; PERCEPTION; REAL; LOCALIZATION; HUMANS; SOUND;
DOI
10.1145/2858036.2858065
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
A key requirement for a sense of presence in Virtual Environments (VEs) is for a user to perceive space as naturally as possible. One critical aspect is distance perception. Compression is a phenomenon in which humans, when judging distances, tend to underestimate the distance between themselves and target objects (termed egocentric or absolute compression) and between other objects (exocentric or relative compression). Results of studies in virtual worlds rendered through head mounted displays are striking, demonstrating significant distance compression error. Distance compression is a multisensory phenomenon, where both audio and visual stimuli are often compressed with respect to their distances from the observer. In this paper, we propose and test a method for reducing crossmodal distance compression in VEs. We report an empirical evaluation of our method via a study of 3D spatial perception within a virtual reality (VR) head mounted display. Applying our method resulted in more accurate distance perception in a VE at longer range, and our results suggest a modification that could adaptively compensate for distance compression at both shorter and longer ranges. Our results have a significant and intriguing implication for designers of VEs: an incongruent audiovisual display, i.e., one where the audio and visual information is intentionally misaligned, may lead to better spatial perception of a virtual scene.
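To illustrate the core idea of an incongruent audiovisual display, the sketch below shows one way such compensation could be implemented: render the audio source farther away than the visual object so that the compressed percept lands near the intended distance. This is a minimal illustration, not the paper's exact algorithm; it assumes a power-law compression model (perceived distance ≈ k·dᵃ with a < 1), a common fit in the distance-perception literature, and the constants k and a are hypothetical placeholders that would have to be estimated for a given display, renderer, and user.

```python
# Illustrative sketch only (not the authors' published method): compensate for
# distance compression by deliberately rendering the sound source farther away
# than the visual target. Assumes a power-law compression model
# perceived(d) = k * d**a with a < 1; k and a below are placeholder values.

def perceived_distance(d: float, k: float = 1.3, a: float = 0.54) -> float:
    """Modelled (compressed) perceived distance for a physical distance d, in metres."""
    return k * d ** a

def compensated_audio_distance(visual_d: float, k: float = 1.3, a: float = 0.54) -> float:
    """Distance at which to render the audio source so that its compressed
    percept matches the intended visual distance, i.e. solve k * d**a = visual_d."""
    return (visual_d / k) ** (1.0 / a)

if __name__ == "__main__":
    for target in (2.0, 5.0, 10.0):
        audio_d = compensated_audio_distance(target)
        print(f"visual target {target:4.1f} m -> render audio at {audio_d:5.1f} m "
              f"(modelled percept {perceived_distance(audio_d):4.1f} m)")
```

Under this assumed model the required audio offset grows with distance, which is consistent with the abstract's observation that the benefit appears at longer range and that an adaptive variant could handle shorter and longer ranges differently.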
Pages: 200-212
Number of pages: 13