No-Reference 3D Point Cloud Quality Assessment Using Multi-View Projection and Deep Convolutional Neural Network

Cited by: 14
Authors
Bourbia, Salima [1 ,2 ]
Karine, Ayoub [1 ]
Chetouani, Aladine [3 ]
El Hassouni, Mohammed [2 ]
Jridi, Maher [1 ]
Affiliations
[1] ISEN Yncrea Ouest, LabISEN, Vision-AD, F-44470 Carquefou, France
[2] Mohammed V Univ, FLSH, FSR, Rabat 08007, Morocco
[3] Univ Orleans, Lab PRISME, F-45100 Orleans, France
Keywords
Measurement; Feature extraction; Visualization; Point cloud compression; Three-dimensional displays; Image color analysis; Convolutional neural networks; Point cloud; quality assessment; point cloud rendering; convolutional neural network (CNN)
DOI
10.1109/ACCESS.2023.3247191
CLC number
TP [Automation and computer technology]
Subject classification code
0812
Abstract
Digital representation of 3D content in the form of 3D point clouds (PCs) has gained increasing interest and has emerged in various computer vision applications. However, various degradations may be introduced into a PC during the acquisition, transmission, or processing steps of the 3D pipeline. Therefore, several Full-Reference, Reduced-Reference, and No-Reference metrics have been proposed to estimate the visual quality of PCs. However, Full-Reference and Reduced-Reference metrics require reference information, which is not accessible in real-world applications, while No-Reference metrics still lack precision in evaluating PC quality. In this context, we propose a novel deep learning-based method for No-Reference Point Cloud Quality Assessment (NR-PCQA) that automatically predicts the perceived visual quality of a PC without using the reference content. More specifically, to imitate the human visual system, which captures both geometric and color degradations when evaluating PC quality, we render the PC into several 2D views using perspective projection. The projected 2D views are then divided into patches that are fed to a Convolutional Neural Network (CNN), which learns discriminative visual quality features and evaluates the local quality of each patch. Finally, the overall quality score of the PC is obtained by pooling the patch quality scores. We conduct extensive experiments on three benchmark databases (ICIP2020, SJTU, and WPC) and compare the proposed model to existing Full-Reference, Reduced-Reference, and No-Reference state-of-the-art methods. The experimental results show that our model achieves high correlations with subjective quality scores and outperforms the state-of-the-art methods.
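The pipeline described in the abstract (multi-view perspective projection, patch-wise quality scoring, score pooling) can be sketched roughly as follows. This is an illustrative sketch only: the projection geometry, image and patch sizes, number of views, and the toy contrast-based scorer standing in for the paper's trained CNN are all assumptions, not the authors' implementation.

```python
import numpy as np

def perspective_project(points, colors, img_size=64, focal=64.0, z_offset=3.0):
    """Render a colored point cloud (points: (N,3), colors: (N,3)) into one
    2D view via perspective projection with a simple z-buffer."""
    img = np.zeros((img_size, img_size, 3))
    depth = np.full((img_size, img_size), np.inf)
    z = points[:, 2] + z_offset                      # push cloud in front of camera
    u = (focal * points[:, 0] / z + img_size / 2).astype(int)
    v = (focal * points[:, 1] / z + img_size / 2).astype(int)
    for i in range(len(points)):
        if 0 <= u[i] < img_size and 0 <= v[i] < img_size and z[i] < depth[v[i], u[i]]:
            depth[v[i], u[i]] = z[i]                 # keep the nearest point per pixel
            img[v[i], u[i]] = colors[i]
    return img

def extract_patches(img, patch=16):
    """Split an H x W x 3 view into non-overlapping patch x patch tiles."""
    h, w, _ = img.shape
    return [img[r:r + patch, c:c + patch]
            for r in range(0, h, patch) for c in range(0, w, patch)]

def patch_quality(p):
    """Placeholder for the CNN regressor: a toy local-contrast proxy,
    NOT the paper's trained network."""
    return float(p.std())

def pcqa_score(points, colors, n_views=6):
    """Pool patch quality scores over several viewpoints
    (here, rotations of the cloud about the y-axis)."""
    scores = []
    for k in range(n_views):
        a = 2 * np.pi * k / n_views
        rot = np.array([[np.cos(a), 0.0, np.sin(a)],
                        [0.0, 1.0, 0.0],
                        [-np.sin(a), 0.0, np.cos(a)]])
        view = perspective_project(points @ rot.T, colors)
        scores += [patch_quality(p) for p in extract_patches(view)]
    return float(np.mean(scores))                    # score-level pooling over patches
```

In the paper the per-patch scorer is a CNN trained against subjective scores; swapping `patch_quality` for such a model turns this sketch into the general NR-PCQA scheme the abstract outlines.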
Pages: 26759-26772
Number of pages: 14