Vision Transformers for Remote Sensing Image Classification

Cited by: 357
Authors
Bazi, Yakoub [1 ]
Bashmal, Laila [1 ]
Rahhal, Mohamad M. Al [2 ]
Dayil, Reham Al [1 ]
Ajlan, Naif Al [1 ]
Affiliations
[1] King Saud Univ, Coll Comp & Informat Sci, Comp Engn Dept, Riyadh 11543, Saudi Arabia
[2] King Saud Univ, Coll Appl Comp Sci, Appl Comp Sci Dept, Riyadh 11543, Saudi Arabia
Keywords
remote sensing; image-level classification; vision transformers; multihead attention; data augmentation; neural networks; scene classification; attention
DOI
10.3390/rs13030516
Chinese Library Classification (CLC)
X [Environmental Science, Safety Science]
Subject Classification Code
08; 0830
Abstract
In this paper, we propose a remote-sensing scene-classification method based on vision transformers. These networks, now recognized as state-of-the-art models in natural language processing, do not rely on the convolution layers used in standard convolutional neural networks (CNNs). Instead, they use multihead attention as their main building block to capture long-range contextual relations between image pixels. In a first step, the images under analysis are divided into patches, which are then flattened and embedded to form a sequence. Position embeddings are added to the patch embeddings to retain positional information. The resulting sequence is fed through several multihead attention layers to generate the final representation. At the classification stage, the first token of the sequence is fed to a softmax classification layer. To boost classification performance, we explore several data augmentation strategies for generating additional training data. Moreover, we show experimentally that the network can be compressed by pruning half of its layers while maintaining competitive classification accuracy. Experimental results on several remote-sensing image datasets demonstrate the promising capability of the model compared with state-of-the-art methods. Specifically, the Vision Transformer obtains average classification accuracies of 98.49%, 95.86%, 95.56%, and 93.83% on the Merced, AID, Optimal31, and NWPU datasets, respectively, while the compressed version obtained by removing half of the multihead attention layers yields 97.90%, 94.27%, 95.30%, and 93.05%, respectively.
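The abstract walks through the standard ViT pipeline step by step: split the image into patches, flatten and linearly embed them into a sequence, add position embeddings, pass the sequence through stacked multihead attention layers, and classify from the first (class) token. The following PyTorch sketch illustrates that pipeline; the patch size, embedding dimension, head count, and depth below are illustrative assumptions, not the configuration reported in the paper (num_classes=21 matches the Merced dataset).

import torch
import torch.nn as nn

class SimpleViT(nn.Module):
    """Minimal Vision Transformer sketch: patch embedding, position
    embeddings, multihead-attention encoder, class-token head."""
    def __init__(self, image_size=224, patch_size=16, dim=768,
                 depth=12, heads=12, num_classes=21):
        super().__init__()
        num_patches = (image_size // patch_size) ** 2
        # Divide the image into patches and embed each one; a strided
        # convolution is equivalent to flattening + linear projection.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size,
                                     stride=patch_size)
        # Learnable class token, prepended to the patch sequence.
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        # Learnable position embeddings keep positional information.
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
        # Stack of multihead self-attention (Transformer encoder) layers.
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=4 * dim,
                                           batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        # Classification head applied to the first (class) token; softmax
        # is applied implicitly by the cross-entropy loss at training time.
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):
        b = x.shape[0]
        x = self.patch_embed(x).flatten(2).transpose(1, 2)  # (B, N, dim)
        x = torch.cat([self.cls_token.expand(b, -1, -1), x], dim=1)
        x = x + self.pos_embed
        x = self.encoder(x)
        return self.head(x[:, 0])  # logits from the class token

model = SimpleViT()
logits = model(torch.randn(2, 3, 224, 224))  # shape (2, 21)

The compression experiment mentioned in the abstract prunes half of the multihead attention layers; in this sketch that would correspond to keeping only part of the encoder stack, e.g. model.encoder.layers = model.encoder.layers[:6], though which layers the authors retain is not specified in the abstract.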
Pages: 1-20
Number of pages: 19