Deep into visual saliency for immersive VR environments rendered in real-time

Cited by: 10
Authors
Celikcan, Ufuk [1 ]
Askin, Mehmet Bahadir [1 ]
Albayrak, Dilara [2 ]
Capin, Tolga K. [2 ]
Affiliations
[1] Hacettepe Univ, Dept Comp Engn, Ankara, Turkey
[2] TED Univ, Dept Comp Engn, Ankara, Turkey
Source
COMPUTERS & GRAPHICS-UK | 2020, Vol. 88
Keywords
Visual saliency; Virtual reality; Stereographics; COMPUTATIONAL MODEL; BOTTOM-UP; ATTENTION; COMPONENTS; MAP;
DOI
10.1016/j.cag.2020.03.006
Chinese Library Classification
TP31 [Computer software]
Subject Classification Codes
081202; 0835
Abstract
As virtual reality (VR) headsets with head-mounted displays (HMDs) become more and more prevalent, new research questions are arising. One emergent question is how best to employ visual saliency prediction in VR applications using the current line of advanced HMDs. Due to the complex nature of the human visual attention mechanism, the problem needs to be investigated from different points of view using different approaches. With this outlook, this work extends a previous effort that explored a set of well-studied visual saliency cues and saliency prediction methods built on these cues, with the aim of assessing how applicable they are to estimating visual saliency in immersive VR environments that are rendered in real-time and experienced with consumer HMDs. To that end, a new user study was conducted with a larger sample; it reveals the effects of experiencing dynamic computer-generated scenes at reduced navigation speeds on visual saliency. Using scenes that offer varying visual experiences in terms of content and depth-of-field range, the study also compares VR viewing to 2D desktop viewing with an expanded set of results. The presented evaluation offers the most in-depth view of visual saliency in immersive, real-time rendered VR to date. The analysis encompassing the results of both studies indicates that decreasing navigation speed reduces the contribution of the depth cue to visual saliency and boosts cues based only on 2D image features. While their scores show content-dependent variation, the saliency prediction methods based on boundary connectivity and surroundedness work best overall for the given settings. (C) 2020 Elsevier Ltd. All rights reserved.
Pages: 70-82
Page count: 13
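
The abstract reports that, of the cues examined, saliency predictors based on boundary connectivity and surroundedness score best overall. The surroundedness cue is the idea behind Zhang and Sclaroff's Boolean Map Saliency (BMS): regions that are fully enclosed by their surroundings in thresholded Boolean maps tend to be salient. The sketch below is a minimal, grayscale-only illustration of that cue, not the authors' evaluation pipeline; the function name, the threshold grid, and the omission of BMS's color channels and morphological post-processing are simplifying assumptions.

import numpy as np
from scipy import ndimage

def bms_saliency(gray, n_thresholds=8):
    """Toy surroundedness (Boolean-map) saliency sketch.

    Each threshold binarizes the image into a Boolean map; connected
    regions that do not touch the image border are 'surrounded' and
    vote for saliency. Votes are accumulated over all maps.
    """
    gray = gray.astype(np.float64)
    gray = (gray - gray.min()) / (gray.max() - gray.min() + 1e-9)
    acc = np.zeros_like(gray)
    for t in np.linspace(0.1, 0.9, n_thresholds):
        for bmap in (gray > t, gray <= t):        # a map and its complement
            labels, num = ndimage.label(bmap)
            if num == 0:
                continue
            # Any connected component present on the border is not surrounded.
            border_labels = np.unique(np.concatenate([
                labels[0, :], labels[-1, :], labels[:, 0], labels[:, -1]]))
            surrounded = bmap & ~np.isin(labels, border_labels)
            acc += surrounded
    return acc / (acc.max() + 1e-9)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.random((120, 160))                # stand-in for a rendered frame
    print(bms_saliency(frame).shape)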