Domain Space Transfer Extreme Learning Machine for Domain Adaptation

Cited by: 95
Authors
Chen, Yiming [1 ]
Song, Shiji [1 ]
Li, Shuang [1 ]
Yang, Le [1 ]
Wu, Cheng [1 ]
Affiliations
[1] Tsinghua Univ, Dept Automat, Beijing 100084, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Domain adaptation; extreme learning machine (ELM); maximum mean discrepancy (MMD); space learning; NEURAL-NETWORKS; MODEL; SVM; ELM;
DOI
10.1109/TCYB.2018.2816981
CLC classification number
TP [Automation technology; computer technology]
Discipline classification code
0812
Abstract
Extreme learning machine (ELM) has been applied to a wide range of classification and regression problems due to its high accuracy and efficiency. However, ELM can only handle cases where the training and testing data come from the same distribution, an assumption that is often violated in real-world situations. As a result, ELM performs poorly in domain adaptation problems, in which the training data (source domain) and testing data (target domain) are differently distributed but somehow related. In this paper, an ELM-based space learning algorithm, domain space transfer ELM (DST-ELM), is developed to address unsupervised domain adaptation problems. Specifically, through DST-ELM, the source and target data are reconstructed in a domain-invariant space without requiring target data labels. Two goals are achieved simultaneously. One is that the target data are fed into an ELM-based feature space learning network whose output is constrained to approximate its input, so that the structural knowledge and intrinsic discriminative information of the target domain are preserved as much as possible. The other is that the source data are projected into the same space as the target data, and the distribution distance between the two domains is minimized in that space. This unsupervised feature transformation network is followed by an adaptive ELM classifier, trained on the transferred labeled source samples and used to predict target data labels. Moreover, the ELMs in the proposed method, including both the space learning ELM and the classifier, require only a small number of hidden nodes, keeping the computational complexity low. Extensive experiments on real-world image and text datasets verify that our approach outperforms several existing domain adaptation methods in accuracy while maintaining high efficiency.
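Two building blocks named in the abstract can be sketched in a few lines: an ELM whose random hidden layer is fixed and whose output weights are solved in closed form (ridge least squares), and a maximum mean discrepancy (MMD) term measuring the distance between source and target feature distributions. The sketch below is a minimal illustration under assumed hyperparameters, not the authors' DST-ELM algorithm; all function names are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_features(X, W, b):
    """Random hidden layer with sigmoid activation: H = g(X W + b)."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def train_elm(X, Y, n_hidden=20, C=10.0):
    """Fix input weights randomly; solve output weights by ridge least squares."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = elm_features(X, W, b)
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ Y)
    return W, b, beta

def mmd2_linear(Hs, Ht):
    """Squared MMD with a linear kernel: gap between domain mean embeddings."""
    d = Hs.mean(axis=0) - Ht.mean(axis=0)
    return float(d @ d)

# Toy example: linearly separable source data, covariate-shifted target data.
Xs = rng.standard_normal((200, 2))
Y = np.stack([(Xs[:, 0] > 0).astype(float),
              (Xs[:, 0] <= 0).astype(float)], axis=1)   # one-hot labels
W, b, beta = train_elm(Xs, Y)
pred = elm_features(Xs, W, b) @ beta
acc = np.mean(pred.argmax(axis=1) == Y.argmax(axis=1))

Xt = Xs + np.array([2.0, 0.0])   # shifted "target" domain, labels unused
gap = mmd2_linear(elm_features(Xs, W, b), elm_features(Xt, W, b))
```

In DST-ELM the MMD-style distance is minimized while learning the shared space, so that the subsequent ELM classifier trained on source labels transfers to the target; here the two pieces are only shown in isolation.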
Pages: 1909-1922
Page count: 14