Scene classification from remote sensing images using mid-level deep feature learning

Cited by: 18
Authors
Ni, Kang [1 ]
Wu, Yiquan [1 ]
Affiliations
[1] Nanjing Univ Aeronaut & Astronaut, Sch Elect & Informat Engn, Jiang Jun Rd 29, Nanjing 211106, Jiangsu, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
BAG-OF-FEATURES; LAND-USE; OBJECT DETECTION; WORDS;
DOI
10.1080/01431161.2019.1667551
Chinese Library Classification (CLC)
TP7 [Remote Sensing Technology];
Discipline classification codes
081102 ; 0816 ; 081602 ; 083002 ; 1404 ;
Abstract
Traditional remote sensing scene classification methods based on low-level local or global features are prone to information loss; they also neglect the spatial correlation within scene images and the redundancy of the feature representation. To overcome these drawbacks, a learnable multilayer energized locality-constrained affine subspace coding (MELASC) - Convolutional Neural Network (CNN) framework (MELASC-CNN) is proposed that generates an orderless feature representation while accounting for both the diversity of local and global deep features and the redundancy of the local geometric structure around visual words. First, the energy of the basis is introduced to limit the number of neighbouring subspaces, and learnable locality-constrained affine subspace coding is presented to preserve the locality and sparsity of the corresponding coding vector; a Gaussian Mixture Model (GMM) is further used to improve the robustness of the dictionary. Second-order coding based on information geometry is then performed to further improve the performance of MELASC-CNN, and three proximity measures are proposed to describe the closeness between features and affine subspaces. Finally, MELASC-CNN combines convolutional and fully connected layers so that both global and local features are considered. It also extracts feature vectors at different resolutions through Spatial Pyramid Matching (SPM), integrating the spatial information into the final representation vector. For validation and comparison, extensive experiments are conducted on two challenging high-resolution remote sensing datasets, showing better performance than related works.
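To make the pipeline described in the abstract concrete, the following is a minimal, illustrative Python sketch of the general idea of locality-constrained coding over a GMM-derived dictionary followed by spatial-pyramid pooling. It is not the authors' MELASC-CNN implementation: affine subspace coding is replaced here with a standard LLC-style encoder over GMM means, the CNN feature extractor is replaced with random stand-in features, and all function names (`build_gmm_dictionary`, `llc_encode`, `spm_pool`) and parameter values are hypothetical.

```python
# Illustrative sketch only -- NOT the authors' MELASC-CNN method.
# Shows: GMM-based dictionary, locality-constrained (LLC-style) coding,
# and spatial-pyramid pooling into an orderless image representation.
import numpy as np
from sklearn.mixture import GaussianMixture


def build_gmm_dictionary(descriptors, n_words=64, seed=0):
    """Fit a diagonal-covariance GMM; its means serve as visual words."""
    gmm = GaussianMixture(n_components=n_words, covariance_type="diag",
                          random_state=seed).fit(descriptors)
    return gmm.means_  # shape (n_words, dim)


def llc_encode(x, dictionary, k=5, beta=1e-4):
    """LLC-style code for one descriptor: a small constrained
    least-squares problem over its k nearest visual words."""
    d2 = np.sum((dictionary - x) ** 2, axis=1)
    idx = np.argsort(d2)[:k]                  # locality: k nearest words
    z = dictionary[idx] - x                   # shift selected words to origin
    C = z @ z.T + beta * np.eye(k)            # regularized local covariance
    w = np.linalg.solve(C, np.ones(k))
    w /= w.sum()                              # sum-to-one constraint
    code = np.zeros(len(dictionary))
    code[idx] = w                             # sparse code over the dictionary
    return code


def spm_pool(codes, positions, levels=(1, 2, 4)):
    """Max-pool codes in each spatial-pyramid cell and concatenate;
    `positions` holds normalized (x, y) in [0, 1) for each descriptor."""
    pooled = []
    for g in levels:
        cell = np.minimum((positions * g).astype(int), g - 1)
        for cx in range(g):
            for cy in range(g):
                mask = (cell[:, 0] == cx) & (cell[:, 1] == cy)
                pooled.append(codes[mask].max(axis=0) if mask.any()
                              else np.zeros(codes.shape[1]))
    return np.concatenate(pooled)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(2000, 128))      # stand-in for CNN local features
    pos = rng.random(size=(2000, 2))          # their normalized image positions
    D = build_gmm_dictionary(feats, n_words=32)
    codes = np.stack([llc_encode(f, D) for f in feats])
    image_vector = spm_pool(codes, pos)       # orderless image representation
    print(image_vector.shape)                 # (1 + 4 + 16) * 32 = (672,)
```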
Pages: 1415-1436
Number of pages: 22