Evaluating the potential of pyramid-based fusion coupled with convolutional neural network for satellite image classification

Cited: 2
Authors
Achala Shakya
Mantosh Biswas
Mahesh Pal
Affiliations
[1] National Institute of Technology, Computer Engineering Department
[2] National Institute of Technology, Civil Engineering Department
Keywords
Remote sensing; Image pyramid; Fusion; Classification; Convolutional neural network (CNN); Multi-scale decomposition;
DOI
10.1007/s12517-022-09677-0
Abstract
Deep learning (DL)-based methods have recently been used extensively for satellite image analysis owing to their ability to automatically extract spatial-spectral features from images. Recent advances in DL have also allowed the remote sensing community to fuse satellite images for enhanced land use/land cover (LULC) classification. In view of this, the present study evaluates the potential of fusing SAR (Sentinel-1) and optical (Sentinel-2) images using pyramid-based DL methods over an agricultural area in India. Three fusion approaches were compared: pyramid-based fusion alone, pyramid-based fusion coupled with a convolutional neural network (CNN), and a combination of two different pyramid decomposition methods applied concurrently with a CNN. The fused images were evaluated in terms of fusion metrics, image quality, and overall classification accuracy obtained with a 2D-CNN-based DL classifier whose hyper-parameters were tuned by Bayesian optimization. Results suggest that pyramid-based fusion with a CNN, as well as the combination of two pyramid decomposition methods with a CNN, retained both the visual quality and the detailed structural information of the input images better than pyramid-based fusion alone. With the fused images obtained from pyramid-based methods coupled with a CNN, VV (vertical-vertical) polarized images achieved improved overall classification accuracies of 99.23% and 99.33%.
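The pyramid-based fusion idea underlying the study can be sketched as a minimal Laplacian-pyramid fusion in NumPy. This is an illustrative sketch, not the paper's exact pipeline: the 2x2 average-pooling kernel, nearest-neighbour upsampling, and the fusion rules (max-absolute selection for detail layers, averaging for the base layer) are common textbook choices assumed here for brevity.

```python
import numpy as np

def down(img):
    """Downsample by 2x2 average pooling (assumes even dimensions)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up(img):
    """Upsample by nearest-neighbour repetition (factor 2 per axis)."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def laplacian_pyramid(img, levels):
    """Decompose an image into `levels` band-pass layers plus a coarse base."""
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        nxt = down(cur)
        pyr.append(cur - up(nxt))  # detail (band-pass) layer at this scale
        cur = nxt
    pyr.append(cur)                # low-resolution base layer
    return pyr

def pyramid_fuse(img_a, img_b, levels=3):
    """Fuse two co-registered images via their Laplacian pyramids."""
    pa = laplacian_pyramid(img_a, levels)
    pb = laplacian_pyramid(img_b, levels)
    fused = [np.where(np.abs(la) >= np.abs(lb), la, lb)  # keep stronger detail
             for la, lb in zip(pa[:-1], pb[:-1])]
    fused.append(0.5 * (pa[-1] + pb[-1]))                # average the base layers
    out = fused[-1]
    for lap in reversed(fused[:-1]):                     # collapse the pyramid
        out = up(out) + lap
    return out
```

Because the decomposition is lossless, fusing an image with itself reconstructs it exactly; with two distinct inputs (e.g. a SAR band and an optical band), the max-absolute rule transfers the locally dominant structure from each source into the fused result.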