No Reference 3D Mesh Quality Assessment Learned From Quality Scores on 2D Projections

Times Cited: 0
Authors
Ibork, Zaineb [1,2]
Nouri, Anass [1,2]
Lezoray, Olivier [2]
Charrier, Christophe [2]
Touahni, Raja [1]
Affiliations
[1] Ibn Tofail Univ, Fac Sci, SETIME Lab, Informat Proc & AI Team, Kenitra 14000, Morocco
[2] Normandie Univ, UNICAEN, ENSICAEN, GREYC, CNRS, F-14000 Caen, France
Source
IEEE ACCESS | 2024, Vol. 12
Keywords
Three-dimensional displays; Feature extraction; Quality assessment; Visualization; Rendering (computer graphics); Databases; Vectors; Convolutional neural networks; 3D mesh; mesh visual quality assessment; convolutional neural network; deep learning; no reference quality assessment; BRISQUE; ERROR;
DOI
10.1109/ACCESS.2024.3435377
CLC Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
With the widespread availability and utilization of 3D meshes across various applications, the need to accurately assess their visual quality has become increasingly important. Despite the significance of this task, the literature offers few No-Reference (NR) approaches for evaluating the visual quality of 3D meshes. In response to this gap, this paper proposes a novel NR approach tailored specifically to score the quality of 3D meshes. After rendering a 3D mesh into 2D views and patches, a pre-trained convolutional neural network automatically extracts deep features from these renderings. These features are then fed to a Multi-Layer Perceptron (MLP) regressor to predict the quality score of the rendered images. The obtained scores are combined with their corresponding BRISQUE scores, and an additional MLP regressor predicts the final score. We present experimental results demonstrating the effectiveness and robustness of our approach across a diverse range of 3D mesh datasets. Comparative analyses with existing NR methods underscore the superior performance and versatility of the proposed approach. Overall, this paper contributes to the advancement of NR techniques for assessing 3D mesh quality, offering a valuable tool for researchers, practitioners, and developers working with 3D models across various domains.
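As an illustration of the two-stage pipeline summarized in the abstract, the Python sketch below outlines one possible realization. It is an assumption-laden outline, not the authors' implementation: a torchvision ResNet-50 stands in for the unspecified pre-trained CNN, scikit-learn's MLPRegressor stands in for the two MLP regressors, the `piq` package supplies the BRISQUE scores, and the renderings are assumed to be HxWx3 uint8 arrays with a fixed number of views per mesh.

import numpy as np
import piq                      # used here only for its BRISQUE implementation (assumption)
import torch
import torchvision.models as models
import torchvision.transforms as T


def build_feature_extractor():
    """Pre-trained CNN with the classification head removed (2048-d features)."""
    backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    backbone.fc = torch.nn.Identity()  # keep the globally pooled features
    return backbone.eval()


# Standard ImageNet preprocessing for the assumed ResNet-50 backbone.
_PREPROCESS = T.Compose([
    T.ToPILImage(),
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])


@torch.no_grad()
def predict_mesh_quality(views, extractor, view_mlp, fusion_mlp):
    """Score one mesh from its 2D views/patches (list of HxWx3 uint8 arrays).

    Stage 1: deep features of each rendering -> per-view quality via the first MLP.
    Stage 2: per-view scores concatenated with per-view BRISQUE scores are fed
             to the second MLP, which outputs the final mesh-level score.
    """
    # Stage 1: CNN features and per-view quality predictions.
    batch = torch.stack([_PREPROCESS(v) for v in views])
    deep_feats = extractor(batch).cpu().numpy()   # shape (n_views, 2048)
    view_scores = view_mlp.predict(deep_feats)    # shape (n_views,)

    # BRISQUE score of each rendering (any per-image BRISQUE implementation would do).
    brisque_scores = np.array([
        float(piq.brisque(
            torch.from_numpy(v).permute(2, 0, 1).float().unsqueeze(0) / 255.0,
            data_range=1.0))
        for v in views
    ])

    # Stage 2: fuse both score sets; assumes fusion_mlp was fitted on the
    # same, fixed number of views per mesh.
    fused = np.concatenate([view_scores, brisque_scores]).reshape(1, -1)
    return float(fusion_mlp.predict(fused)[0])


# Example usage (both regressors must be fitted beforehand on a quality-annotated
# mesh dataset, e.g. with sklearn.neural_network.MLPRegressor):
# extractor = build_feature_extractor()
# score = predict_mesh_quality(rendered_views, extractor, view_mlp, fusion_mlp)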
Pages: 106924-106936
Number of Pages: 13