Dunhuang murals image restoration method based on generative adversarial network

Cited by: 17
Authors
Ren, Hui [1 ,2 ,3 ]
Sun, Ke [1 ,2 ,3 ]
Zhao, Fanhua [1 ,2 ,3 ]
Zhu, Xian [1 ,2 ,3 ]
Affiliations
[1] Commun Univ China, Sch Informat & Commun Engn, Beijing 100024, Peoples R China
[2] Minist Culture & Tourism, Key Lab Acoust Visual Technol & Intelligent Control, Beijing 100024, Peoples R China
[3] Beijing Key Lab Modern Entertainment Technol, Beijing 100024, Peoples R China
Keywords
Murals restoration; Generative adversarial network; Binary convolution mechanism; Ternary joint discriminator;
DOI
10.1186/s40494-024-01159-8
Chinese Library Classification
C [Social Sciences, General];
Subject Classification Code
03 ; 0303 ;
Abstract
Murals are an important part of China's cultural heritage. After more than a thousand years of exposure to sun and wind, most of these ancient murals have become mottled, with damage such as cracking, mold, and even large-scale detachment, and restoring them is urgent work. Digital restoration of mural images reconstructs structure and texture to virtually fill in the damaged regions of an image. Existing digital restoration methods suffer from incomplete restoration and distortion of local details. In this paper, we propose a generative adversarial network model that combines a deep generator with parallel dual-convolution feature extraction and a ternary heterogeneous joint discriminator. The generator extracts image features in parallel with vanilla convolution and dilated convolution, capturing multi-scale features simultaneously, and reasonable parameter settings reduce the loss of image information. A pixel-level discriminator is proposed to identify pixel-level defects in the generated image; together with the global discriminator and the local discriminator, it judges the generated image at different levels and granularities. We build a Dunhuang murals dataset and validate our method on it. The experimental results show that our method improves the PSNR and SSIM evaluation metrics over the comparison methods, and the restored images are more consistent with human subjective perception, achieving effective restoration of mural images.
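The record does not include code. As a rough illustration of the parallel feature-extraction idea described in the abstract, the PyTorch sketch below runs a vanilla 3x3 convolution and a dilated 3x3 convolution side by side and fuses the two feature maps. The class name, channel counts, dilation rate, and 1x1 fusion layer are illustrative assumptions, not the authors' actual architecture.

# Hypothetical sketch (PyTorch): a parallel block that extracts features with a
# vanilla convolution and a dilated convolution simultaneously, then fuses them.
# All names and hyperparameters are illustrative assumptions, not the paper's.
import torch
import torch.nn as nn

class ParallelConvBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, dilation: int = 2):
        super().__init__()
        # Branch 1: vanilla 3x3 convolution captures fine local texture.
        self.vanilla = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        # Branch 2: dilated 3x3 convolution enlarges the receptive field to
        # capture coarser structure without downsampling the feature map.
        self.dilated = nn.Conv2d(in_ch, out_ch, kernel_size=3,
                                 padding=dilation, dilation=dilation)
        # 1x1 convolution fuses the concatenated branches back to out_ch channels.
        self.fuse = nn.Conv2d(2 * out_ch, out_ch, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Concatenate the two scales along the channel dimension, then fuse.
        multi_scale = torch.cat([self.vanilla(x), self.dilated(x)], dim=1)
        return self.act(self.fuse(multi_scale))

if __name__ == "__main__":
    block = ParallelConvBlock(in_ch=3, out_ch=64)
    features = block(torch.randn(1, 3, 256, 256))  # e.g. a 256x256 mural patch
    print(features.shape)  # torch.Size([1, 64, 256, 256])

Fusing the concatenated branches with a 1x1 convolution is one common way to combine multi-scale branches while keeping the output channel count fixed; the fusion scheme used in the paper may differ.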
Pages: 20