Generative Adversarial Capsule Network With ConvLSTM for Hyperspectral Image Classification

Cited by: 27
Authors
Wang, Wei-Ye [1]
Li, Heng-Chao [1]
Deng, Yang-Jun [1]
Shao, Li-Yang [2]
Lu, Xiao-Qiang [3,4]
Du, Qian [5]
Affiliations
[1] Southwest Jiaotong Univ, Sichuan Prov Key Lab Informat Coding & Transmiss, Chengdu 610031, Peoples R China
[2] Southern Univ Sci & Technol, Dept Elect & Elect Engn, Shenzhen 518055, Peoples R China
[3] Chinese Acad Sci, Key Lab Spectral Imaging Technol, Xian 710119, Peoples R China
[4] Chinese Acad Sci, Xian Inst Opt & Precis Mech, Xian 710119, Peoples R China
[5] Mississippi State Univ, Dept Elect & Comp Engn, Starkville, MS 39762 USA
Funding
National Natural Science Foundation of China;
Keywords
Capsule network (CapsNet); convolutional neural network (CNN); data augmentation; deep learning; generative adversarial network (GAN); hyperspectral image (HSI) classification;
DOI
10.1109/LGRS.2020.2976482
Chinese Library Classification (CLC)
P3 [Geophysics]; P59 [Geochemistry];
Discipline Code
0708; 070902;
Abstract
Recently, deep learning has been widely applied to hyperspectral image (HSI) classification because it can extract high-level spatial-spectral features. However, deep learning methods are limited by the lack of sufficient annotated samples. To address this problem, this letter proposes a novel generative adversarial network (GAN) for HSI classification that generates artificial samples for data augmentation, improving classification performance when only a few training samples are available. In the proposed network, a new discriminator is designed by exploiting the capsule network (CapsNet) and convolutional long short-term memory (ConvLSTM); it extracts low-level features and combines them with local spatial-sequence information to form high-level contextual features. In addition, a structured sparse L2,1 constraint is imposed on sample generation to control the modes of the generated data and achieve more stable training. Experimental results on two real HSI data sets show that the proposed method obtains better classification performance than several state-of-the-art deep classification methods.
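The structured sparse constraint named in the abstract is the L2,1 matrix norm, i.e., the L1 norm of the per-row L2 norms, which drives whole rows of a matrix toward zero. Below is a minimal PyTorch sketch of such a penalty; the row-wise application to generated spectra, the tensor shapes, and the regularization weight are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch (not the paper's code) of a structured-sparse L2,1
# penalty, assuming it is added to the generator loss.
import torch

def l21_norm(x: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """||X||_{2,1} = sum_i ||x_i||_2 : L2 norm over each row, L1 across rows."""
    return torch.sqrt((x ** 2).sum(dim=1) + eps).sum()

# Hypothetical usage: penalize a batch of generated spectra so that
# energy concentrates in a few rows, for more stable GAN training.
fake_batch = torch.randn(16, 200)       # 16 generated spectra, 200 bands (assumed)
loss_reg = 1e-3 * l21_norm(fake_batch)  # weight 1e-3 is an assumed hyperparameter
```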
Pages: 523-527 (5 pages)