Super-resolution Method for Rendered Contents by Multi-scale Feature Fusion with High-resolution Geometry Buffers

Cited by: 0
Authors
Zhang H.-N. [1 ]
Guo J. [1 ]
Qin H.-Y. [1 ]
Fu X.-H. [1 ]
Guo Y.-W. [1 ]
Affiliations
[1] State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing
Source
Ruan Jian Xue Bao/Journal of Software | 2024, Vol. 35, No. 6
Keywords
feature fusion; geometry buffer; image super-resolution; neural network; rendering;
DOI
10.13328/j.cnki.jos.006921
Abstract
With the development of modern information technology, people’s demand for high-resolution, realistic visual experiences on display devices has grown, placing higher demands on computer software and hardware and posing performance and workload challenges for rendering technology. Using machine learning techniques such as deep neural networks to improve the quality and performance of rendered images has become a popular research direction in computer graphics, and upsampling low-resolution images through network inference to obtain clearer high-resolution images is an important way to improve image generation performance while preserving high-resolution details. The geometry buffers (G-buffers) generated by the rendering engine during rendering contain rich semantic information, which helps the network learn scene information and features effectively and thus improves the quality of upsampling results. This study designs a deep-neural-network-based super-resolution method for low-resolution rendered content. In addition to the color image of the current frame, the method uses high-resolution G-buffers to assist the computation and reconstruct high-resolution content details. It also adopts a new strategy to fuse the features of the high-resolution buffers and the low-resolution image, performing multi-scale fusion of the different feature maps in a dedicated fusion module. Experiments demonstrate the effectiveness of the proposed fusion strategy and module, and the proposed method shows clear advantages over other image super-resolution methods, especially in preserving high-resolution details. © 2024 Chinese Academy of Sciences. All rights reserved.
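The core idea in the abstract, bringing an upsampled low-resolution color image and a downsampled high-resolution G-buffer to a common resolution at several scales and fusing their features, can be illustrated with a minimal, stdlib-only sketch. This is an illustrative toy under stated assumptions, not the paper's implementation: the function names, nearest-neighbour resampling, and channel concatenation as the "fusion" step are all assumptions here; the actual method uses a learned convolutional fusion module.

```python
def resize_nearest(img, out_h, out_w):
    # Nearest-neighbour resize; img is a list of rows of per-pixel channel tuples.
    in_h, in_w = len(img), len(img[0])
    return [[img[min(int(y * in_h / out_h), in_h - 1)]
                [min(int(x * in_w / out_w), in_w - 1)]
             for x in range(out_w)] for y in range(out_h)]

def concat_channels(a, b):
    # Concatenate two same-sized images along the channel axis.
    return [[a[y][x] + b[y][x] for x in range(len(a[0]))] for y in range(len(a))]

def multi_scale_fuse(lr_color, hr_gbuffer, scales=(1, 2, 4)):
    # For each scale s, bring both inputs to (H/s, W/s) of the high-res target
    # and concatenate their channels, yielding one fused feature map per scale.
    hr_h, hr_w = len(hr_gbuffer), len(hr_gbuffer[0])
    fused = []
    for s in scales:
        h, w = hr_h // s, hr_w // s
        up_color = resize_nearest(lr_color, h, w)    # upsample LR color
        down_gbuf = resize_nearest(hr_gbuffer, h, w)  # downsample HR G-buffer
        fused.append(concat_channels(up_color, down_gbuf))
    return fused

# Toy inputs: a 2x2 RGB color image and a 4x4 four-channel G-buffer
# (e.g. normal xyz + depth).
lr = [[(1, 1, 1), (2, 2, 2)], [(3, 3, 3), (4, 4, 4)]]
hr = [[(y * 4 + x, 0, 0, 1) for x in range(4)] for y in range(4)]
out = multi_scale_fuse(lr, hr, scales=(1, 2))
```

In a real network each fused map would feed a convolutional branch whose outputs are merged before the final high-resolution prediction; here the per-scale maps simply carry 3 + 4 = 7 channels each.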
Pages: 3052–3068
Page count: 16
References (42 in total)
[1]  
Kaplanyan AS, Sochenov A, Leimkuhler T, Okunev M, Goodall T, Rufo G., DeepFovea: Neural reconstruction for foveated rendering and video compression using learned statistics of natural videos, ACM Trans. on Graphics, 38, 6, (2019)
[2]  
Jo Y, Oh SW, Kang J, Kim SJ., Deep video super-resolution network using dynamic upsampling filters without explicit motion compensation, Proc. of the 2018 IEEE/CVF Conf. on Computer Vision and Pattern Recognition, pp. 3224-3232, (2018)
[3]  
Dong C, Loy CC, Tang XO., Accelerating the super-resolution convolutional neural network, Proc. of the 14th European Conf. on Computer Vision, pp. 391-407, (2016)
[4]  
(2022)
[5]  
Burnes A., NVIDIA DLSS 2.0: A big leap in AI rendering, (2020)
[6]  
DirectX-Specs, Variable Rate Shading, (2022)
[7]  
Xiao K, Liktor G, Vaidyanathan K., Coarse pixel shading with temporal supersampling, Proc. of the 2018 ACM SIGGRAPH Symp. on Interactive 3D Graphics and Games, (2018)
[8]  
Sakai H, Nabata K, Yasuaki S, Iwasaki K., Error estimation for many-light rendering with supersampling, Proc. of the 2018 SIGGRAPH Asia Technical Briefs, (2018)
[9]  
Xiao L, Nouri S, Chapman M, Fix A, Lanman D, Kaplanyan A., Neural supersampling for real-time rendering, ACM Trans. on Graphics, 39, 4, (2020)
[10]  
Kalantari NK, Bako S, Sen P., A machine learning approach for filtering Monte Carlo noise, ACM Trans. on Graphics, 34, 4, (2015)