Study on transfer learning-based cross-spectral speckle image reconstruction method

Cited: 0
Authors
Zhao, He [1 ]
Zhang, Yanzhu [1 ,2 ]
Wu, Hao [1 ]
Pu, Jixiong [3 ,4 ]
Affiliations
[1] Shenyang Ligong Univ, Sch Automat & Elect Engn, Shenyang, Peoples R China
[2] Sci & Technol Electroopt Informat Secur Control Lab, Tianjin, Peoples R China
[3] Putian Univ, New Engn Ind Coll, Putian, Peoples R China
[4] Huaqiao Univ, Coll Informat Sci & Engn, Xiamen, Peoples R China
Keywords
deep learning; speckle reconstruction; transfer learning; scattering; layers
DOI
10.1088/1402-4896/ad37aa
CLC classification
O4 [Physics]
Discipline code
0702
Abstract
In recent years, convolutional neural networks (CNNs) have been successfully applied to reconstruct images from the speckle patterns produced when light from an object passes through a scattering medium. Achieving this typically requires collecting a large dataset to train the CNN. In practice, however, the characteristics of the light traversing the scattering medium may change, in which case a substantial amount of new data would have to be collected to re-train the CNN before images can be reconstructed again. To address this challenge, this study introduces transfer learning techniques. Specifically, we propose a novel Residual U-Net Generative Adversarial Network, denoted ResU-GAN. The network is first pre-trained on a large amount of data collected under either visible or non-visible illumination, and then fine-tuned on a small amount of data collected under the other spectral band. Experimental results demonstrate the excellent reconstruction performance of ResU-GAN. Moreover, combined with transfer learning, the network can reconstruct speckle images across different datasets. The findings presented in this paper offer a more generalized approach to applying CNNs in cross-spectral speckle imaging.
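The paper itself does not include code, but the pre-train-then-fine-tune recipe described in the abstract can be illustrated with a short PyTorch sketch. Everything below is an assumption for illustration only: the TinyResUNet class is a simplified stand-in for the actual ResU-GAN generator, the adversarial loss is replaced by a plain L1 loss, and all module and variable names are hypothetical.

```python
# Minimal sketch of the cross-spectral transfer-learning step, assuming a
# PyTorch setup. The real ResU-GAN generator is more elaborate; this
# residual U-Net is a simplified stand-in.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """3x3 convolutional residual block used in the encoder and decoder."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
        )
    def forward(self, x):
        return torch.relu(x + self.body(x))

class TinyResUNet(nn.Module):
    """Two-scale residual U-Net: speckle pattern in, reconstructed image out."""
    def __init__(self):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), ResBlock(32))
        self.down = nn.Conv2d(32, 64, 3, stride=2, padding=1)
        self.enc2 = ResBlock(64)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = nn.Sequential(ResBlock(32), nn.Conv2d(32, 1, 3, padding=1))
    def forward(self, x):
        s1 = self.enc1(x)
        b = self.enc2(self.down(s1))
        return torch.sigmoid(self.dec(self.up(b) + s1))  # skip connection

gen = TinyResUNet()
# Step 1: pre-train on the large source-spectrum dataset (training loop not
# shown), then save the weights, e.g.:
#   torch.save(gen.state_dict(), "gen_source.pt")
# Step 2: transfer. Reload the pre-trained weights and freeze the encoder so
# that only the decoder adapts to the small target-spectrum dataset:
#   gen.load_state_dict(torch.load("gen_source.pt"))
for module in (gen.enc1, gen.down, gen.enc2):
    for p in module.parameters():
        p.requires_grad = False

opt = torch.optim.Adam((p for p in gen.parameters() if p.requires_grad), lr=1e-4)
loss_fn = nn.L1Loss()  # the GAN adversarial term is omitted for brevity

speckle = torch.rand(4, 1, 64, 64)  # stand-in batch of target-band speckles
target = torch.rand(4, 1, 64, 64)   # stand-in ground-truth object images
opt.zero_grad()
loss = loss_fn(gen(speckle), target)
loss.backward()
opt.step()
```

Freezing the speckle-feature encoder and fine-tuning only the decoder is one common transfer recipe; which layers the authors actually retrain, and how the adversarial loss is weighted during fine-tuning, should be taken from the paper itself.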
Pages: 10
Related papers
50 records in total
  • [41] Image Reconstruction from Optical speckle pattern based on deep learning
    Shen, Lihua
    Qi, Bote
    Chen, Rui-pin
    OPTOELECTRONIC IMAGING AND MULTIMEDIA TECHNOLOGY VIII, 2021, 11897
  • [42] Deep Learning-Based Application of Image Style Transfer
    Liao, Yimi
    Huang, Youfu
    MATHEMATICAL PROBLEMS IN ENGINEERING, 2022, 2022
  • [44] Study of three subsurface hydrologic systems based on spectral and cross-spectral analysis of time series
    Molénat, J
    Davy, P
    Gascuel-Odoux, C
    Durand, P
    JOURNAL OF HYDROLOGY, 1999, 222 (1-4) : 152 - 164
  • [45] Transfer learning-based convolutional neural network image recognition method for plant leaves
    Zhao, Y.
    Zheng, Y.
    Shi, H.
    Zhang, L.
    North Atlantic University Union (NAUN), 14: 56 - 62
  • [46] Efficient Transfer Learning for Spectral Image Reconstruction from RGB Images
    Martinez, Emmanuel
    Castro, Santiago
    Bacca, Jorge
    Arguello, Henry
    2020 IEEE COLOMBIAN CONFERENCE ON APPLICATIONS OF COMPUTATIONAL INTELLIGENCE (IEEE COLCACI 2020), 2020
  • [47] Complexities of deep learning-based undersampled MR image reconstruction
    Noordman, Constant Richard
    Yakar, Derya
    Bosma, Joeran
    Simonis, Frank Frederikus Jacobus
    Huisman, Henkjan
    EUROPEAN RADIOLOGY EXPERIMENTAL, 2023, 7 (01)
  • [48] Dictionary Learning-Based Image Reconstruction for Terahertz Computed Tomography
    Zhong, Fasheng
    Niu, Liting
    Wu, Weiwen
    Liu, Fenglin
    JOURNAL OF INFRARED, MILLIMETER, AND TERAHERTZ WAVES, 2021, 42: 829 - 842
  • [49] Robustness Analysis for Deep Learning-Based Image Reconstruction Models
    Ayna, Cemre Omer
    Gurbuz, Ali Cafer
    2022 56TH ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS, AND COMPUTERS, 2022: 1428 - 1432
  • [50] Deep learning-based PET image denoising and reconstruction: a review
    Hashimoto, Fumio
    Onishi, Yuya
    Ote, Kibo
    Tashima, Hideaki
    Reader, Andrew J.
    Yamaya, Taiga
    RADIOLOGICAL PHYSICS AND TECHNOLOGY, 2024, 17 (01) : 24 - 46