Exploring Cross-Domain Pretrained Model for Hyperspectral Image Classification

Cited by: 17
Authors
Lee, Hyungtae [1 ]
Eum, Sungmin [1 ,2 ]
Kwon, Heesung [1 ]
Affiliations
[1] Army Res Lab, Computat & Informat Sci Directorate CISD, Intelligent Percept Branch, Adelphi, MD 20783 USA
[2] Booz Allen Hamilton Inc, Mclean, VA 22102 USA
Source
IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING | 2022, Vol. 60
Keywords
Hyperspectral imaging; Training; Task analysis; Data models; Convolutional neural networks; Data analysis; Analytical models; Cross domain; hyperspectral image classification; pretrain-finetune strategy; CONVOLUTIONAL NEURAL-NETWORK; CNN;
DOI
10.1109/TGRS.2022.3165441
Chinese Library Classification (CLC)
P3 [Geophysics]; P59 [Geochemistry];
Discipline Codes
0708; 070902;
Abstract
A pretrain-finetune strategy is widely used to reduce the overfitting that can occur when data are insufficient for convolutional neural network (CNN) training. The first few layers of a CNN pretrained on a large-scale RGB dataset acquire general image characteristics that transfer remarkably well to tasks on other RGB datasets. In the hyperspectral domain, however, where each domain has its own spectral properties, the conventional pretrain-finetune strategy can no longer be deployed, for three major reasons: 1) spectral characteristics (e.g., frequency range) are inconsistent across domains; 2) the number of data channels differs across domains; and 3) no large-scale hyperspectral dataset exists. We seek to train a universal cross-domain model that can later be deployed to various spectral domains. To achieve this, we physically furnish the model with multiple inlets while keeping a universal portion designed to handle the inconsistent spectral characteristics of different domains; only the universal portion is used in the finetune process. This approach naturally enables our model to learn on multiple domains simultaneously, which acts as an effective workaround for the absence of a large-scale dataset. We carried out a study extensively comparing models trained with the cross-domain approach against models trained from scratch, and found our approach superior in both accuracy and training efficiency. In addition, we verified that our approach effectively reduces overfitting, allowing the model to be deepened from 9 to 13 layers without compromising accuracy.
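The multi-inlet design described in the abstract can be sketched as follows. This is a minimal illustrative skeleton, not the authors' architecture: the class name, layer sizes, and the plain linear-plus-ReLU layers are all assumptions; the only idea taken from the abstract is that each spectral domain gets its own input "inlet" that maps a domain-specific channel count to a common feature width, while a shared "universal" portion is trained on all domains jointly and is the only part reused at finetune time.

```python
import numpy as np

class CrossDomainSketch:
    """Hedged sketch: per-domain inlets + one shared universal portion."""

    def __init__(self, domain_channels, hidden=16, n_classes=8, seed=0):
        rng = np.random.default_rng(seed)
        # One inlet weight matrix per spectral domain, absorbing the
        # inconsistent channel counts (e.g., 200 vs. 103 bands).
        self.inlets = {name: rng.standard_normal((ch, hidden)) * 0.1
                       for name, ch in domain_channels.items()}
        # Universal portion shared by every domain; in the paper's scheme
        # only this part would be carried over when finetuning.
        self.universal = rng.standard_normal((hidden, n_classes)) * 0.1

    def forward(self, x, domain):
        # x: (n_pixels, n_channels) spectra from the named domain.
        h = np.maximum(x @ self.inlets[domain], 0.0)  # inlet + ReLU
        return h @ self.universal                      # shared classifier

# Hypothetical band counts for two common benchmark scenes.
model = CrossDomainSketch({"indian_pines": 200, "pavia": 103})
logits = model.forward(np.ones((5, 200)), "indian_pines")
print(logits.shape)  # (5, 8)
```

Because every domain's gradient flows through the same universal weights, joint training over several small hyperspectral datasets effectively enlarges the training set for the shared portion, which is the workaround the abstract describes for the missing large-scale dataset.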
Pages: 12