Learning a Multi-Branch Neural Network from Multiple Sources for Knowledge Adaptation in Remote Sensing Imagery

Cited by: 31
Authors
Al Rahhal, Mohamad M. [1 ]
Bazi, Yakoub [2 ]
Abdullah, Taghreed [3 ]
Mekhalfi, Mohamed L. [4 ]
AlHichri, Haikel [2 ]
Zuair, Mansour [2 ]
Affiliations
[1] King Saud Univ, Dept Informat Sci, Coll Appl Comp Sci, Riyadh 11543, Saudi Arabia
[2] King Saud Univ, Dept Comp Engn, Coll Comp & Informat Sci, Riyadh 11543, Saudi Arabia
[3] Univ Mysore, Dept Studies Comp Sci, Mysore 570006, Karnataka, India
[4] Univ Batna, Fac Technol, Dept Elect, Batna 05000, Algeria
Keywords
scene classification; multiple sources; multiple domain shifts; multi-branch neural network; domain adaptation; land cover
DOI
10.3390/rs10121890
Chinese Library Classification
X [Environmental Science, Safety Science]
Subject Classification
08; 0830
Abstract
In this paper, we propose a multi-branch neural network, called MB-Net, for knowledge adaptation across multiple remote sensing scene datasets acquired with different sensors over diverse locations and manually labeled by different experts. Our aim is to learn invariant feature representations from multiple source domains with labeled images and one target domain with unlabeled images. To this end, we define for MB-Net an objective function that mitigates the multiple domain shifts at both the feature-representation and decision levels, while retaining the ability to discriminate between different land-cover classes. The complete architecture is trainable end-to-end via the backpropagation algorithm. In the experiments, we demonstrate the effectiveness of the proposed method on a new multiple-domain dataset created from four heterogeneous scene datasets well known to the remote sensing community, namely, the University of California Merced (UC-Merced) dataset, the Aerial Image Dataset (AID), the PatternNet dataset, and the Northwestern Polytechnical University (NWPU) dataset. In particular, this method boosts the average accuracy over all transfer scenarios to 89.05%, compared to a standard architecture based only on cross-entropy loss, which yields an average accuracy of 78.53%.
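The abstract describes an objective that combines per-source classification losses with terms that reduce the shift between each labeled source domain and the unlabeled target domain. The following is a minimal, hypothetical sketch of that idea; the function names, the use of a simple mean-feature discrepancy as the alignment term, and the weighting scheme are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def cross_entropy(probs, labels):
    """Mean negative log-likelihood of the true classes for one source batch."""
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def mean_feature_discrepancy(source_feats, target_feats):
    """Squared distance between mean feature vectors -- a crude stand-in for a
    feature-level domain-alignment loss (e.g., an MMD-style term)."""
    return float(np.sum((source_feats.mean(axis=0) - target_feats.mean(axis=0)) ** 2))

def multi_source_objective(source_batches, target_feats, alignment_weight=1.0):
    """Sum, over all labeled source domains, of the classification loss plus a
    weighted term aligning that source's features with the unlabeled target's.

    source_batches: list of (class_probs, labels, features) per source domain.
    target_feats:   feature matrix extracted from unlabeled target images.
    """
    total = 0.0
    for probs, labels, feats in source_batches:
        total += cross_entropy(probs, labels)
        total += alignment_weight * mean_feature_discrepancy(feats, target_feats)
    return total
```

With `alignment_weight = 0` this reduces to the plain cross-entropy baseline the abstract compares against; the alignment terms are what push the shared representation toward domain invariance.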
Pages: 18