Extended Vision Transformer (ExViT) for Land Use and Land Cover Classification: A Multimodal Deep Learning Framework

Cited by: 244
Authors
Yao, Jing [1]
Zhang, Bing [1,2]
Li, Chenyu [3]
Hong, Danfeng [1]
Chanussot, Jocelyn [1,4]
Affiliations
[1] Chinese Acad Sci, Aerosp Informat Res Inst, Beijing 100094, Peoples R China
[2] Univ Chinese Acad Sci, Coll Resources & Environm, Beijing 100049, Peoples R China
[3] Southeast Univ, Sch Math, Nanjing 210096, Peoples R China
[4] Univ Grenoble Alpes, GIPSA Lab, CNRS, Grenoble INP, F-38000 Grenoble, France
Source
IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING | 2023, Vol. 61
Keywords
Convolutional neural network (CNN); deep learning; hyperspectral; land use and land cover (LULC); light detection and ranging (LiDAR); multimodal image classification; synthetic aperture radar (SAR); vision transformer (ViT); NEURAL-NETWORKS; EXTRACTION; IMAGES;
DOI
10.1109/TGRS.2023.3284671
CLC Classification
P3 [Geophysics]; P59 [Geochemistry]
Subject Classification Codes
0708; 070902
Abstract
The recent success of attention mechanism-driven deep models, with the vision transformer (ViT) as one of the most representative examples, has sparked a wave of research exploring their adaptation to broader domains. However, current transformer-based approaches in the remote sensing (RS) community focus mostly on single-modality data, which limits their ability to make full use of the ever-growing multimodal Earth observation data. To this end, we propose a novel multimodal deep learning framework that extends the conventional ViT with minimal modifications, targeting the task of land use and land cover (LULC) classification. Unlike common stems that adopt either a linear patch projection or a deep regional embedder, our approach processes multimodal RS image patches with parallel branches of position-shared ViTs extended with separable convolution modules, offering an economical way to leverage both spatial and modality-specific channel information. Furthermore, to promote information exchange across heterogeneous modalities, the tokenized embeddings are fused through a cross-modality attention (CMA) module that exploits pixel-level spatial correlation in RS scenes. Both modifications significantly improve the discriminative ability of the classification tokens in each modality, and a further performance gain is attained by a fully token-based decision-level fusion module. We conduct extensive experiments on two multimodal RS benchmark datasets, i.e., the Houston2013 dataset containing hyperspectral (HS) and light detection and ranging (LiDAR) data, and the Berlin dataset with HS and synthetic aperture radar (SAR) data, to demonstrate that our extended vision transformer (ExViT) outperforms concurrent competitors based on transformer or convolutional neural network (CNN) backbones, as well as several competitive machine-learning-based models. The source code and investigated datasets of this work will be made publicly available at https://github.com/jingyao16/ExViT.
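As a rough illustration of the architecture described in the abstract, the following is a minimal PyTorch sketch, not the authors' released implementation: two per-modality transformer branches sharing one positional embedding, a depthwise-separable convolution patch stem, a cross-modality attention step between the token sequences, and decision-level fusion of the per-modality classification tokens. All class names, dimensions, and hyperparameters (SeparableConvStem, ExViTSketch, patch size 4, etc.) are illustrative assumptions; refer to the linked repository for the actual code.

```python
# Hypothetical sketch of the ExViT ideas; names, shapes, and hyperparameters
# are assumptions for illustration only, not the authors' implementation.
import torch
import torch.nn as nn


class SeparableConvStem(nn.Module):
    """Depthwise-separable convolution patch embedder (assumed design)."""
    def __init__(self, in_ch, dim, patch=4):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=patch,
                                   stride=patch, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, dim, kernel_size=1)

    def forward(self, x):                      # x: (B, C, H, W)
        x = self.pointwise(self.depthwise(x))  # (B, dim, H/p, W/p)
        return x.flatten(2).transpose(1, 2)    # (B, N, dim) token sequence


class ExViTSketch(nn.Module):
    """Two-branch ViT with shared positional embedding, cross-modality
    attention, and decision-level fusion of class tokens (sketch only)."""
    def __init__(self, ch_a, ch_b, dim=64, depth=2, heads=4,
                 num_tokens=64, num_classes=15):
        super().__init__()
        self.stem_a = SeparableConvStem(ch_a, dim)
        self.stem_b = SeparableConvStem(ch_b, dim)
        # Positional embedding shared across both modality branches.
        self.pos = nn.Parameter(torch.zeros(1, num_tokens + 1, dim))
        self.cls_a = nn.Parameter(torch.zeros(1, 1, dim))
        self.cls_b = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder_a = nn.TransformerEncoder(layer, depth)
        self.encoder_b = nn.TransformerEncoder(layer, depth)
        # Cross-modality attention: one branch's tokens attend to the other's.
        self.cma = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head_a = nn.Linear(dim, num_classes)
        self.head_b = nn.Linear(dim, num_classes)

    def forward(self, xa, xb):
        ta = self.stem_a(xa)
        tb = self.stem_b(xb)
        ta = torch.cat([self.cls_a.expand(ta.size(0), -1, -1), ta], 1) + self.pos
        tb = torch.cat([self.cls_b.expand(tb.size(0), -1, -1), tb], 1) + self.pos
        ta, tb = self.encoder_a(ta), self.encoder_b(tb)
        # Exchange information across modalities before classification.
        ta2, _ = self.cma(ta, tb, tb)
        tb2, _ = self.cma(tb, ta, ta)
        # Decision-level fusion: average the per-modality class-token logits.
        return 0.5 * (self.head_a(ta2[:, 0]) + self.head_b(tb2[:, 0]))


if __name__ == "__main__":
    model = ExViTSketch(ch_a=144, ch_b=1, num_tokens=64, num_classes=15)
    hs = torch.randn(2, 144, 32, 32)    # e.g., a hyperspectral patch
    lidar = torch.randn(2, 1, 32, 32)   # e.g., a LiDAR-derived raster patch
    print(model(hs, lidar).shape)       # torch.Size([2, 15])
```

In this sketch the fusion is a simple average of the two class-token logits; the paper's token-based decision-level fusion and CMA design may differ in detail.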
Pages: 15