Learning-Shared Cross-Modality Representation Using Multispectral-LiDAR and Hyperspectral Data

Times Cited: 52
Authors
Hong, Danfeng [1 ,2 ]
Chanussot, Jocelyn [3 ,4 ]
Yokoya, Naoto [5 ]
Kang, Jian [2 ]
Zhu, Xiao Xiang [1 ,2 ]
Affiliations
[1] German Aerosp Ctr DLR, Remote Sensing Technol Inst IMF, D-82234 Wessling, Germany
[2] Tech Univ Munich, Signal Proc Earth Observat SiPEO, D-80333 Munich, Germany
[3] Univ Grenoble Alpes, INRIA, CNRS, Grenoble INP, LJK, F-38000 Grenoble, France
[4] Univ Iceland, Fac Elect & Comp Engn, IS-101 Reykjavik, Iceland
[5] RIKEN, RIKEN Ctr Adv Intelligence Project AIP, Geoinformat Unit, Tokyo 1030027, Japan
Funding
Japan Society for the Promotion of Science; European Research Council;
Keywords
Laser radar; Hyperspectral imaging; Training; Feature extraction; Earth; Data models; Cross-modality; feature learning; hyperspectral; multimodality; multispectral-Light Detection and Ranging (LiDAR); shared subspace learning; DATA FUSION; MANIFOLD ALIGNMENT;
DOI
10.1109/LGRS.2019.2944599
Chinese Library Classification (CLC)
P3 [Geophysics]; P59 [Geochemistry];
Discipline Codes
0708; 070902;
Abstract
Due to the ever-growing diversity of data sources, multimodal feature learning has attracted increasing attention. However, most existing methods jointly learn feature representations from modalities that are present in both the training and test sets; the case where a modality is absent at test time has been much less investigated. To this end, in this letter, we propose to learn a shared feature space across multiple modalities during training. In this way, out-of-sample data from any single modality can be directly projected onto the learned space for a more effective cross-modality representation. More significantly, the shared space is treated as a latent subspace in the proposed method, connecting the original multimodal samples with label information to further improve feature discrimination. Experiments are conducted on the multispectral-Light Detection and Ranging (LiDAR) and hyperspectral data set provided by the 2018 IEEE GRSS Data Fusion Contest to demonstrate the effectiveness and superiority of the proposed method in comparison with several popular baselines.
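The shared-subspace idea summarized in the abstract can be sketched as an alternating least-squares procedure. This is a minimal illustration, not the authors' actual algorithm: the variable names, the ridge regularizer, the synthetic data, and the exact update rules are all assumptions. It shows the core mechanism, in which two modality-specific projections and a label regressor are coupled through one latent representation, so that at test time either modality alone can be projected into the shared space.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d1, d2, k, c = 200, 20, 8, 5, 3   # samples, modality dims, subspace dim, classes

# synthetic co-registered data generated from a common latent factor (assumption)
Z_true = rng.normal(size=(n, k))
X1 = Z_true @ rng.normal(size=(k, d1)) + 0.1 * rng.normal(size=(n, d1))
X2 = Z_true @ rng.normal(size=(k, d2)) + 0.1 * rng.normal(size=(n, d2))
Y = np.eye(c)[rng.integers(0, c, size=n)]  # one-hot labels

lam = 1e-2  # ridge regularizer (assumed value)

def ridge(A, B, lam):
    """Solve min_P ||A P - B||^2 + lam ||P||^2 via the normal equations."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ B)

# alternating minimization: the latent Z couples both modalities and the labels
Z = rng.normal(size=(n, k))
for _ in range(50):
    P1 = ridge(X1, Z, lam)   # modality-1 projection into the shared space
    P2 = ridge(X2, Z, lam)   # modality-2 projection into the shared space
    W = ridge(Z, Y, lam)     # label regression from the shared space
    # closed-form update of Z for
    #   min_Z ||X1 P1 - Z||^2 + ||X2 P2 - Z||^2 + ||Z W - Y||^2
    Z = (X1 @ P1 + X2 @ P2 + Y @ W.T) @ np.linalg.inv(2 * np.eye(k) + W @ W.T)

# out-of-sample use: either modality alone is projected onto the shared space
z1 = X1 @ P1
z2 = X2 @ P2
print(np.mean((z1 - z2) ** 2))  # small when the shared space aligns the modalities
```

The label-regression term is what makes the latent subspace discriminative rather than merely reconstructive, mirroring the letter's point that the shared space connects multimodal samples with label information.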
Pages: 1470-1474
Page count: 5