Subjective and Objective Quality Assessment of Rendered Human Avatar Videos in Virtual Reality

Cited by: 0
Authors
Chen, Yu-Chih [1 ]
Saha, Avinab [1 ]
Chapiro, Alexandre [2 ]
Hane, Christian [2 ]
Bazin, Jean-Charles [2 ]
Qiu, Bo [2 ]
Zanetti, Stefano [2 ]
Katsavounidis, Ioannis [2 ]
Bovik, Alan C. [1 ]
Affiliations
[1] Univ Texas Austin, Dept Elect & Comp Engn, Lab Image & Video Engn LIVE, Austin, TX 78712 USA
[2] Meta Platforms Inc, Menlo Pk, CA 94025 USA
Funding
U.S. National Science Foundation;
Keywords
Avatars; Videos; Three-dimensional displays; Quality assessment; Monitoring; Databases; Predictive models; Visualization; Solid modeling; Image coding; Virtual reality; video quality assessment; 3D mesh; human avatar video; six degrees of freedom; VISUAL QUALITY; POINT CLOUDS; MESH; INFORMATION;
DOI
10.1109/TIP.2024.3468881
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
We study the visual quality judgments of human subjects on digital human avatars (sometimes referred to as "holograms" in the parlance of virtual reality [VR] and augmented reality [AR] systems) that have been subjected to distortions. We also study the ability of video quality models to predict human judgments. As the streaming of human avatar videos in VR and AR becomes increasingly common, more advanced human avatar video compression protocols will be required to balance the faithful transmission of high-quality visual representations against variable bandwidth conditions. During transmission over the internet, the perceived quality of compressed human avatar videos can be severely impaired by visual artifacts. To optimize trade-offs between perceptual quality and data volume in practical workflows, video quality assessment (VQA) models are essential tools. However, very few VQA algorithms have been developed specifically to analyze human body avatar videos, due, at least in part, to the dearth of appropriate and comprehensive datasets of adequate size. Towards filling this gap, we introduce the LIVE-Meta Rendered Human Avatar VQA Database, which contains 720 human avatar videos processed using 20 different combinations of encoding parameters, labeled by corresponding human perceptual quality judgments that were collected using six-degrees-of-freedom VR headsets. To demonstrate the usefulness of this new and unique video resource, we use it to study and compare the performances of a variety of state-of-the-art Full Reference and No Reference video quality prediction models, including a new model called HoloQA. As a service to the research community, we publicly release the metadata of the new database at https://live.ece.utexas.edu/research/LIVE-Meta-rendered-human-avatar/index.html.
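The abstract compares Full Reference and No Reference VQA models against the subjective scores collected in the database. As a hedged illustration only (the paper's exact evaluation protocol is not given in this record), the Python sketch below computes the correlation metrics commonly reported in such benchmarks: SROCC, PLCC, and RMSE between model predictions and mean opinion scores (MOS). The function name and the example numbers are hypothetical placeholders.

```python
# Minimal sketch, not the paper's method: scoring a VQA model against MOS
# with the metrics typically reported for quality-assessment databases.
import numpy as np
from scipy import stats


def evaluate_vqa_model(predicted_scores, mos):
    """Compare model-predicted quality scores to mean opinion scores (MOS)."""
    predicted_scores = np.asarray(predicted_scores, dtype=float)
    mos = np.asarray(mos, dtype=float)

    # Spearman rank-order correlation: monotonic agreement with human ranking.
    srocc, _ = stats.spearmanr(predicted_scores, mos)
    # Pearson linear correlation: linear agreement with the subjective scores.
    plcc, _ = stats.pearsonr(predicted_scores, mos)
    # Root-mean-square error between predictions and MOS.
    rmse = float(np.sqrt(np.mean((predicted_scores - mos) ** 2)))
    return {"SROCC": srocc, "PLCC": plcc, "RMSE": rmse}


if __name__ == "__main__":
    # Hypothetical scores for a handful of distorted avatar videos.
    model_predictions = [62.1, 48.3, 75.0, 55.4, 81.2]
    subjective_mos = [60.5, 45.0, 78.2, 58.1, 83.0]
    print(evaluate_vqa_model(model_predictions, subjective_mos))
```

In published VQA studies, PLCC and RMSE are usually computed after fitting a logistic mapping from predictions to MOS; the sketch omits that step for brevity.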
Pages: 5740 - 5754
Page count: 15
Related Articles
50 items in total
  • [41] Subjective and Objective Quality Assessment of MPEG-2, H.264 and H.265 Videos
    Bajcinovci, Viliams
    Vranjes, Mario
    Babic, Danijel
    Kovacevic, Branimir
    PROCEEDINGS OF 2017 INTERNATIONAL SYMPOSIUM ELMAR, 2017, : 73 - 77
  • [42] Objective and subjective assessment of stereoscopically separated labels in augmented reality
    Peterson, Stephen D.
    Axholt, Magnus
    Ellis, Stephen R.
    COMPUTERS & GRAPHICS-UK, 2009, 33 (01): 23 - 33
  • [43] Subjective QoE of 360-Degree Virtual Reality Videos and Machine Learning Predictions
    Anwar, Muhammad Shahid
    Wang, Jing
    Khan, Wahab
    Ullah, Asad
    Ahmad, Sadique
    Fei, Zesong
    IEEE ACCESS, 2020, 8 (08): 148084 - 148099
  • [44] A Comparison of Robotic Simulation Performance on Basic Virtual Reality Skills: Simulator Subjective Versus Objective Assessment Tools
    Dubin, Ariel K.
    Smith, Roger
    Julian, Danielle
    Tanaka, Alyssa
    Mattingly, Patricia
    JOURNAL OF MINIMALLY INVASIVE GYNECOLOGY, 2017, 24 (07) : 1185 - 1190
  • [45] Subjective Quality Assessment of User-Generated 360° Videos
    Fang, Yuming
    Yao, Yiru
    Sui, Xiangjie
    Ma, Kede
    2023 IEEE CONFERENCE ON VIRTUAL REALITY AND 3D USER INTERFACES ABSTRACTS AND WORKSHOPS, VRW, 2023, : 723 - 724
  • [46] On the Number of Participants Needed for Subjective Quality Assessment of 360° Videos
    Zepernick, Hans-Jurgen
    Elwardy, Majed
    Hu, Yan
    Sundstedt, Veronica
    2019 13TH INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING AND COMMUNICATION SYSTEMS (ICSPCS), 2019,
  • [47] CVIQD: SUBJECTIVE QUALITY EVALUATION OF COMPRESSED VIRTUAL REALITY IMAGES
    Sun, Wei
    Gu, Ke
    Zhai, Guangtao
    Ma, Siwei
    Lin, Weisi
    Le Callet, Patrick
    2017 24TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2017, : 3450 - 3454
  • [48] Subjective visual vertical assessment with mobile virtual reality system
    Uloziene, Ingrida
    Totiliene, Milda
    Paulauskas, Andrius
    Blazauskas, Tomas
    Marozas, Vaidotas
    Kaski, Diego
    Ulozas, Virgilijus
    MEDICINA-LITHUANIA, 2017, 53 (06): 394 - 402
  • [49] Measuring quality of experience for 360-degree videos in virtual reality
    Anwar, Muhammad Shahid
    Wang, Jing
    Ullah, Asad
    Khan, Wahab
    Ahmad, Sadique
    Fei, Zesong
    SCIENCE CHINA-INFORMATION SCIENCES, 2020, 63 (10)