Low-Rank Correlation Learning for Unsupervised Domain Adaptation

Cited by: 4
Authors
Lu, Yuwu [1 ]
Wong, Wai Keung [2 ,3 ]
Yuan, Chun [4 ,5 ]
Lai, Zhihui [6 ]
Li, Xuelong [7 ]
Affiliations
[1] South China Normal Univ, Sch Software, Foshan 528225, Peoples R China
[2] Hong Kong Polytech Univ, Sch Fash & Text, Kowloon, Hong Kong, Peoples R China
[3] Lab Artificial Intelligence Design, Sci Pk, Hong Kong, Peoples R China
[4] Tsinghua Shenzhen Int Grad Sch, Shenzhen, Peoples R China
[5] Peng Cheng Lab, Shenzhen 518055, Peoples R China
[6] Shenzhen Univ, Coll Comp Sci & Software Engn, Shenzhen 518055, Peoples R China
[7] Northwestern Polytech Univ, Sch Artificial Intelligence Opt & Elect iOPEN, Xian 710072, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Feature extraction; Task analysis; Correlation; Training data; Electronic mail; Noise measurement; Image color analysis; domain adaptation; image classification; low-rank; transfer learning; REGRESSION; NETWORKS;
DOI
10.1109/TMM.2023.3321430
CLC number
TP [Automation Technology, Computer Technology];
Subject classification code
0812;
Abstract
In unsupervised domain adaptation (UDA), negative transfer is one of the most challenging problems. In many applications, the domain data are corrupted by noise or outliers owing to complex acquisition environments. If such noisy data are used directly for domain adaptation, the disturbances and negative influence of the noise are also transferred to the target tasks. Preventing these noise-induced disturbances and negative effects is therefore a key problem in UDA. In this article, a low-rank correlation learning (LRCL) method is proposed for UDA. In LRCL, the noisy data of both domains are recovered by low-rank learning, yielding clean source and target data and thereby preventing the disturbances and negative effects of the noise. Maximally correlated features of the clean source and target data are then learned in a latent common space through a novel correlation regularization term. LRCL further reduces the distribution difference between the clean source and target data by constructing a reconstruction term in which the clean target data are linearly represented by the clean source data. To exploit the temporal and structural information of the data, we extend LRCL to a graph setting and propose graph LRCL (GLRCL). Extensive experiments on several public benchmarks demonstrate that our methods effectively prevent negative transfer and achieve better classification performance than competing approaches.
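The abstract describes three ingredients: low-rank recovery of the noisy domain data, a correlation term that aligns the cleaned domains in a latent common space, and a reconstruction term that represents the clean target data linearly by the clean source data. The following minimal NumPy sketch illustrates these steps under simplifying assumptions: the cleaning uses a robust-PCA-style alternation (singular value thresholding plus soft-thresholded sparse noise), the shared latent space is a plain SVD used as a crude stand-in for the paper's correlation regularizer, and all variable names (Xs, Xt, Zs, Zt, P, W) are illustrative, not taken from the paper.

    # Illustrative sketch only, NOT the authors' LRCL implementation.
    import numpy as np

    def svt(M, tau):
        # Singular value thresholding: proximal operator of the nuclear
        # norm, the standard tool for producing a low-rank recovery.
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return (U * np.maximum(s - tau, 0.0)) @ Vt

    def low_rank_clean(X, tau=1.0, lam=0.1, n_iter=100):
        # Split X = Z + E with Z low-rank (clean data) and E sparse
        # (noise/outliers), via a simple alternating scheme.
        E = np.zeros_like(X)
        for _ in range(n_iter):
            Z = svt(X - E, tau)                                # low-rank part
            R = X - Z
            E = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)  # sparse noise
        return Z

    rng = np.random.default_rng(0)
    Xs = rng.standard_normal((50, 200))  # source: 50 features x 200 samples
    Xt = rng.standard_normal((50, 150))  # target: 50 features x 150 samples

    Zs, Zt = low_rank_clean(Xs), low_rank_clean(Xt)

    # Shared latent space: top-k left singular vectors of the concatenated
    # clean data (a stand-in for the paper's correlation regularization).
    k = 10
    U, _, _ = np.linalg.svd(np.hstack([Zs, Zt]), full_matrices=False)
    P = U[:, :k]
    Ls, Lt = P.T @ Zs, P.T @ Zt

    # Reconstruction term: clean target data linearly represented by the
    # clean source data, solved as ridge-regularized least squares for W.
    gamma = 1e-2
    W = np.linalg.solve(Ls.T @ Ls + gamma * np.eye(Ls.shape[1]), Ls.T @ Lt)
    print("reconstruction residual:", np.linalg.norm(Lt - Ls @ W))

In the paper these terms are optimized jointly, and GLRCL additionally incorporates graph structure; the staged pipeline above only illustrates the role each term plays.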
Pages: 4153-4167
Number of pages: 15