Digital core image reconstruction based on residual self-attention generative adversarial networks

Cited: 0
Authors
Lei He
Fuping Gui
Min Hu
Daolun Li
Wenshu Zha
Jieqing Tan
Institutions
[1] Hefei University of Technology
Source
Computational Geosciences | 2023 / Vol. 27
Keywords
Reconstruction; Digital core image; Self-attention mechanism; Residual; Generative adversarial networks;
DOI
Not available
Abstract
Accurate physical analysis of a digital core requires high-quality reconstructed digital core images, which remains an open problem. This paper proposes a digital core image reconstruction method based on residual self-attention generative adversarial networks. In traditional generative adversarial networks (GANs), high-resolution detail features are generated only from spatially local points in the low-resolution details, and long-range dependencies can be captured only through multiple stacked convolution operations. To address this, a residual self-attention block is introduced into the traditional GAN, which strengthens the learning of correlations between features and extracts more features. To assess the quality of the generated shale images, the Fréchet Inception Distance (FID) and Kernel Inception Distance (KID) are used to evaluate the consistency of the Gaussian distributions of the reconstructed and original shale images, and the two-point covariance function is used to evaluate their structural similarity. Extensive experiments show that the shale images reconstructed by the proposed method are closer to the original images and achieve better results than those of state-of-the-art methods.
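The residual self-attention block described in the abstract can be sketched roughly as follows: a minimal NumPy illustration of a SAGAN-style self-attention layer with a residual skip connection, where every spatial position attends to all others in a single step instead of relying on stacked convolutions. The weight matrices `Wq`, `Wk`, `Wv` and the blending scalar `gamma` stand in for learned 1x1-convolution parameters; this is an assumed sketch of the general mechanism, not the authors' actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def residual_self_attention(x, Wq, Wk, Wv, gamma=0.1):
    """SAGAN-style self-attention with a residual connection.

    x            : feature map of shape (H, W, C)
    Wq, Wk, Wv   : stand-ins for learned 1x1-conv weights, shapes
                   (C, C'), (C, C'), (C, C)
    gamma        : learned scalar blending the attention output
                   into the residual (identity) path
    """
    H, W, C = x.shape
    feats = x.reshape(-1, C)                        # (N, C), N = H*W positions
    q, k, v = feats @ Wq, feats @ Wk, feats @ Wv
    # (N, N) attention map: each position attends to every other,
    # capturing long-range dependencies in one step.
    attn = softmax(q @ k.T / np.sqrt(k.shape[1]))
    out = (attn @ v).reshape(H, W, -1)
    return x + gamma * out                           # residual connection
```

With `gamma = 0` the block reduces to the identity, so the residual path guarantees the layer can never be worse than passing features through unchanged.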
Pages: 499-514 (15 pages)
Related papers
50 records in total
  • [21] SA-CapsGAN: Using Capsule Networks with embedded self-attention for Generative Adversarial Network
    Sun, Guangcong
    Ding, Shifei
    Sun, Tongfeng
    Zhang, Chenglong
    NEUROCOMPUTING, 2021, 423 : 399 - 406
  • [22] SADD: Generative Adversarial Networks via Self-attention and Dual Discriminator in Unsupervised Domain Adaptation
    Dai, Zaiyan
    Yang, Jun
    Fan, Anfei
    Jia, Jinyin
    Chen, Junfan
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT VIII, 2024, 14432 : 473 - 484
  • [23] Attention mechanism-based generative adversarial networks for image cartoonization
    Zhao, Wenqing
    Zhu, Jianlin
    Li, Ping
    Huang, Jin
    Tang, Junwei
    VISUAL COMPUTER, 2024, 40 (06): : 3971 - 3984
  • [24] SAM-GAN: Self-Attention supporting Multi-stage Generative Adversarial Networks for text-to-image synthesis
    Peng, Dunlu
    Yang, Wuchen
    Liu, Cong
    Lu, Shuairui
    NEURAL NETWORKS, 2021, 138 : 57 - 67
  • [25] Digital Core Modeling Based on Pretrained Generative Adversarial Neural Networks
    Zhang, Qing
    Wang, Benqiang
    Liang, Xusheng
    Li, Yizhen
    He, Feng
    Hao, Yuexiang
    GEOFLUIDS, 2022, 2022
  • [26] Channel Attention Image Steganography With Generative Adversarial Networks
    Tan, Jingxuan
    Liao, Xin
    Liu, Jiate
    Cao, Yun
    Jiang, Hongbo
    IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING, 2022, 9 (02): : 888 - 903
  • [27] NOLSAGAN: A NO-Label Self-Attention Segmentation Method Based on Feature Reconstruction Using Generative Adversarial Networks for Optical Fiber End-Face
    Mei, Shuang
    Men, Xiaotan
    Diao, Zhaolei
    Dong, Hongbo
    Wen, Guojun
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2024, 73 : 1 - 1
  • [28] Image deblurring method based on self-attention and residual wavelet transform
    Zhang, Bing
    Sun, Jing
    Sun, Fuming
    Wang, Fasheng
    Zhu, Bing
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 244
  • [29] Image super-resolution reconstruction based on self-attention GAN
    Wang X.-S.
    Chao J.
    Cheng Y.-H.
    Kongzhi yu Juece/Control and Decision, 2021, 36 (06): : 1324 - 1332
  • [30] Face Super-Resolution Reconstruction Based on Self-Attention Residual Network
    Liu, Qing-Ming
    Jia, Rui-Sheng
    Zhao, Chao-Yue
    Liu, Xiao-Ying
    Sun, Hong-Mei
    Zhang, Xing-Li
    IEEE ACCESS, 2020, 8 : 4110 - 4121