Depth-Wise Separable Convolution Neural Network with Residual Connection for Hyperspectral Image Classification

Cited by: 31
Authors
Dang, Lanxue [1 ,2 ,3 ]
Pang, Peidong [1 ,2 ]
Lee, Jay [4 ,5 ]
Affiliations
[1] Henan Univ, Sch Comp & Informat Engn, Kaifeng 475004, Peoples R China
[2] Henan Univ, Henan Key Lab Big Data Anal & Proc, Kaifeng 475004, Peoples R China
[3] Henan Univ, Henan Engn Lab Spatial Informat Proc, Kaifeng 475004, Peoples R China
[4] Henan Univ, Coll Environm & Planning, Kaifeng 475004, Peoples R China
[5] Kent State Univ, Dept Geog, Kent, OH 44240 USA
Funding
National Natural Science Foundation of China;
Keywords
convolution neural network; depth-wise separable convolution; residual unit; hyperspectral image classification; spatial-spectral features; SPECTRAL-SPATIAL CLASSIFICATION; FEATURE-SELECTION;
DOI
10.3390/rs12203408
Chinese Library Classification (CLC)
X [Environmental Science, Safety Science];
Discipline Code
08 ; 0830 ;
Abstract
Neural network-based hyperspectral image (HSI) classification models have deep structures, which lead to a large number of training parameters, long training times, and excessive computational cost. Deepening the network is also likely to cause vanishing gradients, which limits further improvement of classification accuracy. To this end, a residual unit with fewer training parameters was constructed by combining residual connections with depth-wise separable convolution. As the depth of the network increases, the number of output channels of each residual unit grows linearly by a small amount. The deepened network can continuously extract spectral and spatial features while building a cone-shaped network structure by stacking the residual units. At the end of the model, a 1 x 1 convolution layer combined with a global average pooling layer replaces the traditional fully connected layer to complete the classification, reducing the number of parameters needed in the network. Experiments were conducted on three benchmark HSI datasets: Indian Pines, Pavia University, and Kennedy Space Center. The overall classification accuracies were 98.85%, 99.58%, and 99.96%, respectively. Compared with other classification methods, the proposed network model achieves higher classification accuracy while spending less time on training and testing samples.
Pages: 1-20
Number of pages: 20
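
The abstract above describes the architecture only at a high level. The following PyTorch sketch illustrates the general idea under stated assumptions: the class names, channel widths (base_ch, growth), kernel sizes, patch size, and number of residual units are illustrative guesses for demonstration, not the authors' exact settings.

# Minimal sketch: depth-wise separable convolutions wrapped in residual units,
# channel counts that grow linearly by a small step, and a 1x1 convolution +
# global average pooling head in place of a fully connected layer.
import torch
import torch.nn as nn


class DepthwiseSeparableConv(nn.Module):
    """3x3 depth-wise convolution followed by a 1x1 point-wise convolution."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1,
                                   groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))


class ResidualUnit(nn.Module):
    """Residual connection around a depth-wise separable convolution block."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = DepthwiseSeparableConv(in_ch, out_ch)
        # 1x1 projection so the shortcut matches the slightly wider output.
        self.shortcut = (nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
                         if in_ch != out_ch else nn.Identity())

    def forward(self, x):
        return self.body(x) + self.shortcut(x)


class DWSResNetHSI(nn.Module):
    """Stack of residual units ended by 1x1 conv + global average pooling."""
    def __init__(self, in_bands=200, num_classes=16,
                 base_ch=32, growth=16, num_units=4):  # hypothetical defaults
        super().__init__()
        units, ch = [ResidualUnit(in_bands, base_ch)], base_ch
        for _ in range(num_units - 1):
            units.append(ResidualUnit(ch, ch + growth))  # small linear increase
            ch += growth
        self.features = nn.Sequential(*units)
        self.classifier = nn.Conv2d(ch, num_classes, kernel_size=1)  # replaces FC
        self.pool = nn.AdaptiveAvgPool2d(1)                          # global avg pool

    def forward(self, x):            # x: (batch, bands, patch_h, patch_w)
        x = self.features(x)
        x = self.pool(self.classifier(x))
        return x.flatten(1)          # (batch, num_classes) logits


if __name__ == "__main__":
    # e.g. 11x11 spatial patches from Indian Pines (200 bands, 16 classes)
    model = DWSResNetHSI(in_bands=200, num_classes=16)
    logits = model(torch.randn(8, 200, 11, 11))
    print(logits.shape)  # torch.Size([8, 16])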