IESRGAN: Enhanced U-Net Structured Generative Adversarial Network for Remote Sensing Image Super-Resolution Reconstruction

Cited by: 9
Authors
Yue, Xiaohan [1 ]
Liu, Danfeng [1 ]
Wang, Liguo [1 ]
Benediktsson, Jon Atli [2 ]
Meng, Linghong [1 ]
Deng, Lei [1 ]
Affiliations
[1] Dalian Minzu Univ, Coll Informat & Commun Engn, Dalian 116600, Peoples R China
[2] Univ Iceland, Fac Elect & Comp Engn, IS-107 Reykjavik, Iceland
Keywords
super-resolution reconstruction; remote sensing images; generative adversarial networks
DOI: 10.3390/rs15143490
Chinese Library Classification
X [Environmental Science, Safety Science]
Discipline classification code
08; 0830
Abstract
With the continuous development of modern remote sensing satellite technology, high-resolution (HR) remote sensing image data have come into widespread use. However, because the areas to be monitored are vast and HR images are difficult to obtain, most monitoring projects still rely on low-resolution (LR) data for the regions under observation. Remote sensing image super-resolution (SR) reconstruction technology effectively compensates for the lack of original HR imagery. This paper proposes an Improved Enhanced Super-Resolution Generative Adversarial Network (IESRGAN) based on an enhanced U-Net structure for 4x detail reconstruction of LR images, using NaSC-TG2 remote sensing images. The method makes targeted improvements to both the generator and the discriminator of the GAN. Specifically, in the generator, input images undergo reflective padding before entering the Residual-in-Residual Dense Blocks (RRDB) to enhance edge information. The discriminator adopts a U-Net structure with spectral normalization, focusing on semantic and structural differences between real and fake images and thereby improving both the quality of the generated images and GAN performance. To evaluate the effectiveness and generalization ability of the proposed model, experiments were conducted on multiple real-world remote sensing image datasets. The results demonstrate that IESRGAN generalizes well while delivering strong performance on the PSNR, SSIM, and LPIPS image evaluation metrics.
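The abstract reports PSNR among its evaluation metrics. As a reminder of how that metric is defined, here is a minimal generic sketch; it is not the authors' evaluation code, and the function name `psnr` and the flat-list image representation are assumptions for illustration only:

```python
import math

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio between two equally sized images,
    given as flat lists of pixel intensities (illustrative sketch).

    PSNR = 10 * log10(max_val^2 / MSE); higher means the reconstruction
    is closer to the reference. Identical images yield infinity.
    """
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(max_val ** 2 / mse)
```

For example, a uniform error of 10 intensity levels on an 8-bit image gives an MSE of 100 and thus a PSNR of about 28.13 dB.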
Pages: 18
Cited references
44 in total
  • [1] Ali, A. M.; Benjdira, B.; Koubaa, A.; Boulila, W.; El-Shafai, W. TESR: Two-Stage Approach for Enhancement and Super-Resolution of Remote Sensing Images. Remote Sensing, 2023, 15(9).
  • [2] Blau, Y.; Mechrez, R.; Timofte, R.; Michaeli, T.; Zelnik-Manor, L. The 2018 PIRM Challenge on Perceptual Image Super-Resolution. Computer Vision - ECCV 2018 Workshops, Pt V, 2019, 11133: 334-355.
  • [3] Blu, T.; Thévenaz, P.; Unser, M. Linear interpolation revitalized. IEEE Transactions on Image Processing, 2004, 13(5): 710-719.
  • [4] Chen, Y. P. 2017, arXiv:1707.01629.
  • [5] Cheng, G.; Han, J.; Lu, X. Remote Sensing Image Scene Classification: Benchmark and State of the Art. Proceedings of the IEEE, 2017, 105(10): 1865-1883.
  • [6] Dong, C.; Loy, C. C.; He, K.; Tang, X. Image Super-Resolution Using Deep Convolutional Networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016, 38(2): 295-307.
  • [7] Cao, Q. D. 2020, arXiv:1807.01688; DOI: 10.1007/s11069-020-04133-2.
  • [8] Feng, X.; Su, X.; Shen, J.; Jin, H. Single Space Object Image Denoising and Super-Resolution Reconstructing Using Deep Convolutional Networks. Remote Sensing, 2019, 11(16).
  • [9] Fernandez-Beltran, R.; Latorre-Carmona, P.; Pla, F. Single-frame super-resolution in remote sensing: a practical overview. International Journal of Remote Sensing, 2017, 38(1): 314-354.
  • [10] Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Networks. Communications of the ACM, 2020, 63(11): 139-144.