Cloud-Guided Fusion With SAR-to-Optical Translation for Thick Cloud Removal

Cited by: 2
Authors
Xiang, Xuanyu [1 ]
Tan, Yihua [1 ]
Yan, Longfei [1 ]
Affiliations
[1] Huazhong Univ Sci & Technol, Sch Artificial Intelligence & Automat, Natl Key Lab Multispectral Informat Intelligent Pr, Wuhan 430074, Peoples R China
Source
IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING | 2024, Vol. 62
Funding
National Natural Science Foundation of China;
Keywords
Clouds; Cloud computing; Optical imaging; Radar polarimetry; Fats; Optical sensors; Feature extraction; Cloud removal; data fusion; deep learning; generative adversarial network (GAN); synthetic aperture radar (SAR)-to-optical translation; IMAGERY; NETWORK;
DOI
10.1109/TGRS.2024.3431556
Chinese Library Classification
P3 [Geophysics]; P59 [Geochemistry];
Discipline Codes
0708; 070902;
Abstract
Deep learning has been widely used for thick cloud removal (TCR) in optical satellite images. Since thick clouds completely block the surface, synthetic aperture radar (SAR) images have recently been used to assist in recovering the occluded information. However, this approach faces several challenges: 1) the significant domain gap between SAR and optical features can interfere with recovering occluded optical information from SAR and 2) TCR methods need to distinguish between cloudy and non-cloudy regions; otherwise, inconsistencies may arise between the recovered regions and the remaining non-cloudy regions. To this end, we propose a new SAR-assisted TCR method based on a two-step fusion framework, which consists of a feature alignment translation (FAT) network and a cloud-guided fusion (CGF) network. First, the FAT network leverages the features shared between SAR and optical images to translate SAR images into corresponding optical images, thus recovering the occluded information. Because a gap remains between the translated images and real cloud-free images, the CGF network uses the cloudy images to further refine the translated images, yielding the cloud-removed images. In the CGF network, the cloud distribution is predicted to distinguish cloudy from non-cloudy regions; the predicted distribution then guides the refinement of the recovered regions using the non-cloudy regions. Extensive experiments on both simulated and real datasets show that the proposed algorithm outperforms state-of-the-art methods.
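The two-step framework described in the abstract can be illustrated with a minimal NumPy sketch. Here `fat_translate` and `predict_cloud_mask` are hypothetical stand-ins for the paper's FAT and CGF networks (a real implementation would use learned models); only the fusion rule — recovered content fills cloudy pixels while non-cloudy pixels are kept from the original — reflects the abstract's description.

```python
import numpy as np

def fat_translate(sar):
    # Hypothetical stand-in for the FAT network: maps a single-channel
    # SAR image into the 3-channel optical domain by broadcasting.
    return np.repeat(sar[..., None], 3, axis=-1)  # (H, W) -> (H, W, 3)

def predict_cloud_mask(cloudy_optical):
    # Hypothetical stand-in for CGF's cloud-distribution prediction:
    # bright pixels are labeled cloud (1.0), the rest clear (0.0).
    brightness = cloudy_optical.mean(axis=-1)
    return (brightness > 0.8).astype(float)[..., None]  # (H, W, 1)

def cloud_guided_fusion(cloudy_optical, translated):
    # Mask-guided blend: translated (recovered) content is used inside
    # cloudy regions; non-cloudy regions keep their original pixels,
    # so clear areas are left untouched.
    mask = predict_cloud_mask(cloudy_optical)
    return mask * translated + (1.0 - mask) * cloudy_optical

# Toy example: top half of the optical image is "cloud" (all-white).
H, W = 4, 4
sar = np.full((H, W), 0.5)
cloudy = np.zeros((H, W, 3))
cloudy[:2] = 1.0
result = cloud_guided_fusion(cloudy, fat_translate(sar))
```

In this toy case the top-half pixels are replaced by the translated SAR content (0.5) while the bottom half keeps its original values, mirroring the consistency goal stated in challenge 2).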
Pages: 15
Cited References
39 records
  • [1] SAR to Optical Image Synthesis for Cloud Removal with Generative Adversarial Networks
    Bermudez, J. D.
    Happ, P. N.
    Oliveira, D. A. B.
    Feitosa, R. Q.
    [J]. ISPRS TC I MID-TERM SYMPOSIUM INNOVATIVE SENSING - FROM SENSORS TO METHODS AND APPLICATIONS, 2018, 4-1 : 5 - 11
  • [2] Synthesis of Multispectral Optical Images From SAR/Optical Multitemporal Data Using Conditional Generative Adversarial Networks
    Bermudez, Jose D.
    Happ, Patrick N.
    Feitosa, Raul Q.
    Oliveira, Dario A. B.
    [J]. IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2019, 16 (08) : 1220 - 1224
  • [3] Cloud Removal with SAR-Optical Data Fusion and Graph-Based Feature Aggregation Network
    Chen, Shanjing
    Zhang, Wenjuan
    Li, Zhen
    Wang, Yuxi
    Zhang, Bing
    [J]. REMOTE SENSING, 2022, 14 (14)
  • [4] Thick Clouds Removing From Multitemporal Landsat Images Using Spatiotemporal Neural Networks
    Chen, Yang
    Weng, Qihao
    Tang, Luliang
    Zhang, Xia
    Bilal, Muhammad
    Li, Qingquan
    [J]. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2022, 60
  • [5] Cloud Removal in Remote Sensing Images Using Generative Adversarial Networks and SAR-to-Optical Image Translation
    Darbaghshahi, Faramarz Naderi
    Mohammadi, Mohammad Reza
    Soryani, Mohsen
    [J]. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2022, 60
  • [6] Removal of Optically Thick Clouds from Multi-Spectral Satellite Images Using Multi-Frequency SAR Data
    Eckardt, Robert
    Berger, Christian
    Thiel, Christian
    Schmullius, Christiane
    [J]. REMOTE SENSING, 2013, 5 (06) : 2973 - 3006
  • [7] Cloud Removal with Fusion of High Resolution Optical and SAR Images Using Generative Adversarial Networks
    Gao, Jianhao
    Yuan, Qiangqiang
    Li, Jie
    Zhang, Hai
    Su, Xin
    [J]. REMOTE SENSING, 2020, 12 (01)
  • [8] Understanding the Difficulty of Training Deep Feedforward Neural Networks
    Glorot, Xavier
    Bengio, Yoshua
    [J]. INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS (AISTATS), 2010
  • [9] Grohnfeldt, C.
    [J]. IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM (IGARSS), 2018, P1726, DOI 10.1109/IGARSS.2018.8519215
  • [10] Hensel, M.
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS, 2017, Vol. 30