Perceptual point cloud quality assessment for immersive metaverse experience

Cited by: 0
Authors
Cheng, Baoping [1 ,2 ]
Luo, Lei [3 ,4 ]
He, Ziyang [3 ]
Zhu, Ce [4 ]
Tao, Xiaoming [1 ]
Affiliations
[1] Tsinghua Univ, Dept Elect Engn, Beijing 100084, Peoples R China
[2] China Mobile Hangzhou Informat Technol Co Ltd, Hangzhou 310000, Peoples R China
[3] Chongqing Univ Posts & Telecommun, Sch Commun & Informat Engn, Chongqing 400065, Peoples R China
[4] Univ Elect Sci & Technol China, Sch Informat & Commun Engn, Chengdu 611731, Peoples R China
Funding
National Natural Science Foundation of China;
关键词
Metaverse; Point cloud; Quality assessment; Point feature histogram; Earth mover's distance; ERROR;
DOI
10.1016/j.dcan.2024.07.001
CLC number
TN [Electronic technology, communication technology];
Discipline code
0809;
Abstract
Perceptual quality assessment for point clouds is critical to an immersive metaverse experience, yet it remains a challenging task. First, a point cloud consists of unstructured 3D points, which makes its topology complex. Second, quality impairment generally involves both geometric attributes and color properties, which further complicates the measurement of geometric distortion. We propose a perceptual point cloud quality assessment model that follows the perceptual characteristics of the Human Visual System (HVS) and the intrinsic properties of point clouds. The point cloud is first pre-processed to extract geometric skeleton keypoints via graph filtering-based re-sampling, and local neighboring regions around these keypoints are constructed by K-Nearest Neighbors (KNN) clustering. For geometric distortion, the Point Feature Histogram (PFH) is extracted as the feature descriptor, and the Earth Mover's Distance (EMD) between the PFHs of corresponding local neighboring regions in the reference and distorted point clouds is computed as the geometric quality measurement. For color distortion, the statistical moments of the corresponding local neighboring regions are computed as the color quality measurement. Finally, the global perceptual quality score is obtained as a linearly weighted aggregation of the geometric and color quality measurements. Experimental results on extensive datasets show that the proposed method achieves leading performance compared with state-of-the-art methods while requiring less computing time, and demonstrate its robustness across various distortion types. The source code is available at https://github.com/llsurreal919/PointCloudQualityAssessment.
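The pipeline described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: keypoints from graph filtering-based re-sampling are assumed to be given, a simple distance-to-centroid histogram stands in for the full Point Feature Histogram, the 1D EMD between normalized histograms is computed in closed form as the L1 distance between cumulative distributions, and the color measurement uses the first two statistical moments per channel.

```python
import numpy as np

def knn_neighborhoods(points, keypoints, k):
    # Brute-force KNN: indices of the k nearest cloud points per keypoint.
    d = np.linalg.norm(points[None, :, :] - keypoints[:, None, :], axis=2)
    return np.argsort(d, axis=1)[:, :k]

def feature_histogram(region, bins=8):
    # Simplified geometric descriptor (stand-in for the PFH in the paper):
    # normalized histogram of point distances to the region centroid.
    r = np.linalg.norm(region - region.mean(axis=0), axis=1)
    h, _ = np.histogram(r, bins=bins, range=(0.0, r.max() + 1e-9))
    return h / h.sum()

def emd_1d(h1, h2):
    # EMD between two normalized 1D histograms = L1 distance of their CDFs.
    return np.abs(np.cumsum(h1) - np.cumsum(h2)).sum()

def color_moment_distance(c_ref, c_dist):
    # Difference of first two statistical moments (mean, std) per channel.
    return (np.abs(c_ref.mean(0) - c_dist.mean(0)).sum()
            + np.abs(c_ref.std(0) - c_dist.std(0)).sum())

def pcqa_score(ref_xyz, ref_rgb, dist_xyz, dist_rgb, keypoints, k=16, w=0.5):
    # Linearly weighted aggregation of geometric and color measurements,
    # averaged over local regions around the (given) skeleton keypoints.
    idx_r = knn_neighborhoods(ref_xyz, keypoints, k)
    idx_d = knn_neighborhoods(dist_xyz, keypoints, k)
    geo = col = 0.0
    for ir, idd in zip(idx_r, idx_d):
        geo += emd_1d(feature_histogram(ref_xyz[ir]),
                      feature_histogram(dist_xyz[idd]))
        col += color_moment_distance(ref_rgb[ir], dist_rgb[idd])
    n = len(keypoints)
    return w * geo / n + (1.0 - w) * col / n
```

A lower score means smaller perceptual degradation; an undistorted copy of the reference scores exactly zero, and geometric noise raises the score through both the histogram EMD and the shifted neighborhoods.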
Pages: 806-817
Number of pages: 12
References
37 records in total
[21] Perry S., 2020, IEEE International Conference on Image Processing (ICIP), p. 3428. DOI: 10.1109/ICIP40778.2020.9191308
[22] Qiao X., Feng Y., Liu S., Shan T., Tao R., "Radar Point Clouds Processing for Human Activity Classification Using Convolutional Multilinear Subspace Learning," IEEE Transactions on Geoscience and Remote Sensing, 2022, vol. 60.
[23] Rusu R.B., 2009, IEEE International Conference on Robotics and Automation, p. 1848.
[24] Rusu R.B., Marton Z.C., Blodow N., Beetz M., "Learning Informative Point Classes for the Acquisition of Object Model Maps," 10th International Conference on Control, Automation, Robotics & Vision (ICARCV 2008), 2008, pp. 643-650.
[25] Schwarz S., 2019, Document N18175, ISO/IEC JTC1/SC29/WG11.
[26] Tian D., 2017, IEEE International Conference on Image Processing (ICIP), p. 3460. DOI: 10.1109/ICIP.2017.8296925
[27] VQEG, "Final report from the Video Quality Experts Group on the validation of objective models of video quality assessment."
[28] Wang F., Li W., Xu D., "Cross-Dataset Point Cloud Recognition Using Deep-Shallow Domain Adaptation Network," IEEE Transactions on Image Processing, 2021, vol. 30, pp. 7364-7377.
[29] Wang Z., Bovik A.C., Sheikh H.R., Simoncelli E.P., "Image Quality Assessment: From Error Visibility to Structural Similarity," IEEE Transactions on Image Processing, 2004, vol. 13, no. 4, pp. 600-612.
[30] Xu Y., Yang Q., Yang L., Hwang J.-N., "EPES: Point Cloud Quality Modeling Using Elastic Potential Energy Similarity," IEEE Transactions on Broadcasting, 2022, vol. 68, no. 1, pp. 33-42.