Fragments Inpainting for Tomb Murals Using a Dual-Attention Mechanism GAN with Improved Generators

Cited: 4
Authors
Wu, Meng [1 ,2 ]
Chang, Xiao [1 ]
Wang, Jia [3 ]
Affiliations
[1] Xian Univ Architecture & Technol, Sch Informat & Control Engn, Xian 710055, Peoples R China
[2] Xian Univ Architecture & Technol, Inst Interdisciplinary & Innovate Res, Xian 710055, Peoples R China
[3] Shaanxi Hist Museum, Xian 710061, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2023, Vol. 13, Issue 6
Funding
National Natural Science Foundation of China;
Keywords
multiscale feature aggregation; double attention; GAN; image inpainting; tomb mural; RESTORATION; IMAGE; PAINTINGS; NETWORK;
DOI
10.3390/app13063972
Chinese Library Classification
O6 [Chemistry];
Discipline Code
0703;
Abstract
As the only murals preserved underground, tomb murals are subject to damage from changes in temperature, humidity, and foundation settlement. Traditional mural inpainting is time-consuming and must be drawn manually by experts, so digital inpainting is increasingly needed to save time and cost. Because samples are scarce and the damage is varied, the image features are scattered and partially sparse, and the colors are less vivid than in other images. Conventional deep-learning inpainting loses information and generates implausible structures, whereas the generative adversarial network has recently proved more effective. This paper therefore presents an inpainting model based on dual-attention multiscale feature aggregation and an improved generator. First, an improved residual prior and an attention mechanism are added to the generator module to preserve the image structure. Second, the model combines spatial and channel attention with multiscale feature aggregation to modify the mapping network structure and improve inpainting accuracy. Finally, the segmental loss function and its training method are improved. Experimental results measured by peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and mean square error (MSE) on epitaxial masks, crack masks, random small masks, and random large masks are better than those of other methods, demonstrating the model's performance in inpainting different types of mural deterioration. The results can serve as a reference for experts during manual inpainting, saving its cost and time.
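The abstract combines spatial and channel attention into a dual-attention mechanism. The paper's actual modules use learned convolutional layers; as a minimal, parameter-free sketch of how the two gates compose, channel attention can be illustrated as reweighting each channel by its globally pooled response, and spatial attention as reweighting each pixel by its mean activation across channels (all function names here are illustrative, not from the paper):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    """Gate each channel of a (C, H, W) feature map by its
    global-average-pooled response (illustrative, parameter-free)."""
    pooled = feat.mean(axis=(1, 2))        # (C,) one scalar per channel
    weights = sigmoid(pooled)              # per-channel gate in (0, 1)
    return feat * weights[:, None, None]

def spatial_attention(feat):
    """Gate each spatial location by its mean activation across channels."""
    pooled = feat.mean(axis=0)             # (H, W) one scalar per pixel
    weights = sigmoid(pooled)              # per-pixel gate in (0, 1)
    return feat * weights[None, :, :]

def dual_attention(feat):
    """Apply channel attention, then spatial attention, in sequence."""
    return spatial_attention(channel_attention(feat))
```

A learned version would replace the fixed sigmoid gates with small convolutional or fully connected layers, but the broadcasting pattern, channel-wise weights of shape (C, 1, 1) and spatial weights of shape (1, H, W), is the same.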
Pages: 19
Related Papers
19 records in total
  • [1] Dadnet: dual-attention detection network for crack segmentation on tomb murals
    Wu, Meng
    Chai, Ruochang
    Zhang, Yongqin
    Lu, Zhiyong
    HERITAGE SCIENCE, 2024, 12 (01):
  • [2] Dual-Attention GAN for Large-Pose Face Frontalization
    Yin, Yu
    Jiang, Songyao
    Robinson, Joseph P.
    Fu, Yun
    2020 15TH IEEE INTERNATIONAL CONFERENCE ON AUTOMATIC FACE AND GESTURE RECOGNITION (FG 2020), 2020, : 249 - 256
  • [3] OMOFuse: An Optimized Dual-Attention Mechanism Model for Infrared and Visible Image Fusion
    Yuan, Jianye
    Li, Song
    MATHEMATICS, 2023, 11 (24)
  • [4] DAM-GAN : IMAGE INPAINTING USING DYNAMIC ATTENTION MAP BASED ON FAKE TEXTURE DETECTION
    Cha, Dongmin
    Kim, Daijin
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 4883 - 4887
  • [5] A superior image inpainting scheme using Transformer-based self-supervised attention GAN model
    Zhou, Meili
    Liu, Xiangzhen
    Yi, Tingting
    Bai, Zongwen
    Zhang, Pei
    EXPERT SYSTEMS WITH APPLICATIONS, 2023, 233
  • [6] Image Inpainting of Thangka Murals Using Edge-Assisted Feature Fusion and Self Attention Based Local Refine Network
    Jia, Ying
    Li, Hang
    Fang, Jie
    Chen, Xin
    Ji, Liqi
    Wang, Nianyi
    IEEE ACCESS, 2023, 11 : 84360 - 84370
  • [7] CS-MRI Reconstruction Using an Improved GAN with Dilated Residual Networks and Channel Attention Mechanism
    Li, Xia
    Zhang, Hui
    Yang, Hao
    Li, Tie-Qiang
    SENSORS, 2023, 23 (18)
  • [8] A Deep-Learning Method for Remaining Useful Life Prediction of Power Machinery via Dual-Attention Mechanism
    Wang, Fan
    Liu, Aihua
    Qu, Chunyang
    Xiong, Ruolan
    Chen, Lu
    SENSORS, 2025, 25 (02)
  • [9] Structure-aware multi-view image inpainting using dual consistency attention
    Xiang, Hongyue
    Min, Weidong
    Han, Qing
    Zha, Cheng
    Liu, Qian
    Zhu, Meng
    INFORMATION FUSION, 2024, 104
  • [10] Image caption generation using a dual attention mechanism
    Padate, Roshni
    Jain, Amit
    Kalla, Mukesh
    Sharma, Arvind
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2023, 123