A general framework for scalable transductive transfer learning

Cited by: 24
Authors
Bahadori, Mohammad Taha [1 ]
Liu, Yan [1 ]
Zhang, Dan [2 ]
Affiliations
[1] Univ So Calif, Dept Comp Sci, Los Angeles, CA 90089 USA
[2] Purdue Univ, Dept Comp Sci, W Lafayette, IN 47907 USA
Keywords
Transductive transfer learning; Large-margin approach; Rademacher complexity; Stochastic gradient descent;
DOI
10.1007/s10115-013-0647-5
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Transductive transfer learning is a special type of transfer learning problem in which abundant labeled examples are available in the source domain but only unlabeled examples are available in the target domain. It arises naturally in applications such as spam filtering and microblog mining. In this paper, we propose a general framework that solves the problem by mapping the input features of both the source domain and the target domain into a shared latent space while simultaneously minimizing the feature reconstruction loss and the prediction loss. We develop one specific instance of the framework, namely the latent large-margin transductive transfer learning algorithm, and derive a theoretical bound on its classification loss via Rademacher complexity. We also provide a unified view of several popular transfer learning algorithms under our framework. Experimental results on one synthetic dataset and three application datasets demonstrate the advantages of the proposed algorithm over other state-of-the-art methods.
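The joint objective sketched in the abstract (a shared latent mapping, a feature reconstruction loss on both domains, a large-margin prediction loss on the labeled source data, all trained by stochastic gradient descent) can be illustrated with a minimal toy sketch. This is not the paper's algorithm; the linear encoder `W`, the latent classifier `w`, the weight `lam`, and the synthetic data are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a labeled source domain and an unlabeled, covariate-shifted target.
Xs = rng.normal(size=(200, 10))                      # source features
ys = np.sign(Xs[:, 0] + 0.1 * rng.normal(size=200))  # labels driven by feature 0
Xt = Xs + 0.3 * rng.normal(size=(200, 10))           # target features, unlabeled

d, k = 10, 4                       # input and latent dimensionality
W = 0.1 * rng.normal(size=(d, k))  # shared linear encoder (decoder = W.T)
w = np.zeros(k)                    # large-margin classifier in latent space
lam, lr = 0.1, 0.01                # reconstruction weight and SGD step size

for epoch in range(50):
    for i in rng.permutation(len(Xs)):
        x, y = Xs[i], ys[i]
        z = x @ W
        # Hinge (large-margin) loss on the labeled source example.
        if y * (z @ w) < 1:
            W += lr * y * np.outer(x, w)  # hinge gradient w.r.t. W (old w)
            w += lr * y * z               # hinge gradient w.r.t. w
        # Reconstruction loss ||xr - (xr W) W^T||^2 on one source and one
        # randomly drawn target example, so both domains shape the latent space.
        for xr in (x, Xt[rng.integers(len(Xt))]):
            zr = xr @ W
            r = zr @ W.T - xr             # reconstruction residual
            W -= lr * lam * 2 * (np.outer(r, zr) + np.outer(xr, r @ W))

source_acc = np.mean(np.sign(Xs @ W @ w) == ys)
print(f"source accuracy in latent space: {source_acc:.2f}")
```

Because the target examples enter only through the reconstruction term, the learned latent space is shaped by both domains even though target labels are never used, which is the transductive ingredient the abstract describes.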
Pages: 61-83
Number of pages: 23