Center-bridged Interaction Fusion for hyperspectral and LiDAR classification

Cited: 0
Authors
Huo, Lu [1 ]
Xia, Jiahao [1 ]
Zhang, Leijie [1 ]
Zhang, Haimin [1 ]
Xu, Min [1 ]
Affiliations
[1] Univ Technol Sydney, Fac Engn & IT, Sch Elect & Data Engn, Sydney, NSW 2007, Australia
Funding
Australian Research Council;
Keywords
Hyperspectral Image classification; Light detection and ranging; Multi-sensor; Transformer; Cross-modal attention; IMAGE CLASSIFICATION;
DOI
10.1016/j.neucom.2024.127757
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recent classifications in Earth Observation (EO) commonly involve a combination of Hyperspectral Image (HSI) and Light Detection and Ranging (LiDAR) signals. However, many current methods fail to consider the HSI-LiDAR information concurrently, especially in terms of both its intra- and inter-modality aspects. Additionally, current methods are generally limited in their ability to fuse the features extracted from different modalities. Hence, this paper proposes a center-bridged framework, called Interaction Fusion (IF), that can leverage diverse information concerning the intra- and inter-modality relationships at the same time. More specifically, intra- and inter-modality information can be enriched by introducing the center patch of HSI (cp-HSI) as an extra input. This introduces additional contextual information within and across modalities that can be leveraged for deeper insights. Further, we propose a fusion matrix as a structural feature map designed to integrate nine views generated by a view generator, enabling the adaptive combination of intra- and inter-modality information. Overall, our approach allows potential patterns to be captured while mitigating any bias resulting from incomplete information. Extensive experiments conducted on three widely recognized datasets (Trento, MUUFL, and Houston) demonstrate that the IF framework achieves state-of-the-art results, surpassing existing methods.
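The abstract's view-generator and fusion-matrix idea can be sketched as follows. This is a hedged illustration only, not the authors' implementation: it assumes the nine views arise from pairwise cross-attention among the three inputs (HSI patch tokens, the cp-HSI center patch, and LiDAR patch tokens), which the abstract suggests but does not state, and the `fuse` step below is a naive average standing in for the paper's adaptive combination.

```python
import numpy as np

def cross_attention(q_feats, kv_feats):
    """Scaled dot-product attention of query tokens over key/value tokens."""
    d = q_feats.shape[-1]
    scores = q_feats @ kv_feats.T / np.sqrt(d)            # (Nq, Nk)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # softmax over keys
    return weights @ kv_feats                             # (Nq, d)

def nine_views(hsi, cp_hsi, lidar):
    """Pairwise cross-attention among the three inputs -> 3x3 grid of views.

    Diagonal entries are self-attention (intra-modality); off-diagonal
    entries are cross-attention (inter-modality), with cp-HSI as the bridge.
    """
    inputs = [hsi, cp_hsi, lidar]
    return [[cross_attention(q, kv) for kv in inputs] for q in inputs]

def fuse(views):
    """Naive stand-in for the fusion matrix: pool each view, then average."""
    pooled = [v.mean(axis=0) for row in views for v in row]  # 9 vectors
    return np.mean(pooled, axis=0)

# Toy inputs: 25 HSI tokens, 1 center-patch token, 25 LiDAR tokens, dim 16.
rng = np.random.default_rng(0)
hsi, cp_hsi, lidar = (rng.standard_normal((n, 16)) for n in (25, 1, 25))
views = nine_views(hsi, cp_hsi, lidar)
fused = fuse(views)   # single 16-dim descriptor fed to a classifier head
```

In a trained model the uniform average in `fuse` would be replaced by learned, adaptive weights over the nine views, which is the role the abstract assigns to the fusion matrix.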
Pages: 11