Neural Point Cloud Rendering via Depth Peeling Multi-Projection and Temporal Refinement

Cited by: 0
Authors
Ye K. [1 ]
Pan Q. [1 ]
Ren Z. [1 ]
Affiliations
[1] State Key Laboratory of CAD&CG, Zhejiang University, Hangzhou
Keywords
depth peeling; hair point cloud; neural rendering; temporal stability
DOI: 10.3724/SP.J.1089.2023.19419
Abstract
To address the inability of existing point cloud-based neural rendering networks to render high-quality hair with temporal stability, a depth peeling and temporal refinement network is presented. The depth peeling module projects the point cloud into multiple depth layers and fuses the per-layer projections to capture the translucency of hair; the fused result is then fed into the temporal refinement network. This network reprojects the point cloud between adjacent frames to establish correspondences between the current frame and previous frames, and produces a temporally stable final result for the current frame. Experiments on high-quality hair datasets generated by ray tracing show that, compared with existing methods, the proposed method achieves better temporal stability and rendering quality. © 2023 Institute of Computing Technology. All rights reserved.
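The core depth peeling idea in the abstract — assigning each projected point to a per-pixel depth layer so that a hair strand's front-to-back structure survives projection — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name, array layout, and fixed layer count are assumptions.

```python
import numpy as np

def depth_peel(points_uv, depths, H, W, num_layers=3):
    """Peel a projected point cloud into per-pixel depth layers.

    points_uv  : (N, 2) integer pixel coordinates (u, v) of projected points
    depths     : (N,) camera-space depth of each point
    Returns a (num_layers, H, W) stack of depth maps: layer 0 holds the
    nearest point hitting each pixel, layer 1 the second nearest, and so
    on; pixels a layer never reaches stay at +inf.
    """
    layers = np.full((num_layers, H, W), np.inf)
    counts = np.zeros((H, W), dtype=int)  # points already assigned per pixel
    # Visit points front-to-back so layer k really is the k-th nearest.
    for i in np.argsort(depths):
        u, v = points_uv[i]
        if 0 <= u < W and 0 <= v < H:
            k = counts[v, u]
            if k < num_layers:
                layers[k, v, u] = depths[i]
                counts[v, u] += 1
    return layers
```

In a full pipeline, each layer would also carry point features (color, strand direction) that a network fuses with learned translucency weights; here only depth is peeled to keep the sketch short.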
Pages: 666-684
Page count: 18
Related References (21 total)
  • [1] Tewari A, Fried O, Thies J, et al., State of the art on neural rendering, Computer Graphics Forum, 39, 2, pp. 701-727, (2020)
  • [2] Hedman P, Philip J, Price T, et al., Deep blending for free-viewpoint image-based rendering, ACM Transactions on Graphics, 37, 6, (2018)
  • [3] Xu Z X, Bi S, Sunkavalli K, et al., Deep view synthesis from sparse photometric images, ACM Transactions on Graphics, 38, 4, (2019)
  • [4] Saito S, Huang Z, Natsume R, et al., PIFu: pixel-aligned implicit function for high-resolution clothed human digitization, Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 2304-2314, (2019)
  • [5] Lombardi S, Simon T, Saragih J, et al., Neural volumes: learning dynamic renderable volumes from images
  • [6] Meshry M, Goldman D B, Khamis S, et al., Neural rerendering in the wild, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6878-6887, (2019)
  • [7] Shum H, Kang S B., A review of image-based rendering techniques, Proceedings of the IEEE/SPIE Visual Communications and Image Processing, pp. 2-13, (2000)
  • [8] Marschner S R, Jensen H W, Cammarano M, et al., Light scattering from human hair fibers, ACM Transactions on Graphics, 22, 3, pp. 780-791, (2003)
  • [9] D'Eon E, Francois G, Hill M, et al., An energy-conserving hair reflectance model, Computer Graphics Forum, 30, 4, pp. 1181-1187, (2011)
  • [10] Yan L Q, Tseng C W, Jensen H W, et al., Physically-accurate fur reflectance: modeling, measurement and rendering, ACM Transactions on Graphics, 34, 6, (2015)