Cross-domain knowledge collaboration for blending-target domain adaptation

Cited by: 3
Authors
Zhang, Bo [1 ]
Zhang, Xiaoming [2 ]
Huang, Feiran [3 ]
Miao, Dezhuang [2 ]
Affiliations
[1] Nanjing Normal Univ, Sch Comp & Elect Informat, Sch Artificial Intelligence, Nanjing 210023, Peoples R China
[2] Beihang Univ, Sch Cyber Sci & Technol, Beijing 100191, Peoples R China
[3] Jinan Univ, Coll Cyber Secur, Guangzhou 510632, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Domain adaptation; Blending-target domain adaptation; Moment matching; Knowledge collaboration;
DOI
10.1016/j.ipm.2024.103730
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Unsupervised domain adaptation (UDA) avoids expensive data annotation for unlabeled target domains by fully exploiting the knowledge of an existing source domain. In practice, the target data are usually highly heterogeneous, mixing multiple latent domains, and the source data sometimes involve private user information that is forbidden to access directly. To this end, this paper tackles blending-target data under the source-available setting and, for the first time, under the source-free setting. Specifically, we devise a novel Cross-domain Knowledge Collaboration (CdKC) framework, which mainly comprises a prediction network and an adaptation network. The complementarity of the two networks is exploited to explore the intrinsic structure of the target domains. CdKC is capable of learning a domain-invariant space while simultaneously disentangling domain-specific features, which greatly boosts UDA performance. A total of 12 tasks are conducted on three visual datasets to verify the superior performance of CdKC against state-of-the-art models designed under 4 different UDA settings. The experiments demonstrate that the accuracy of CdKC still exceeds that of D-CGCT by 0.4% on the Office dataset and by 1.2% on the Office-Home dataset, even though D-CGCT can access the source data and the domain labels of the targets, which CdKC cannot. This verifies the effectiveness of CdKC even under much looser source and target domain restrictions.
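The "moment matching" keyword refers to aligning feature distributions between source and target domains by matching their statistics. The record does not include the paper's implementation; as a minimal illustrative sketch, the classic moment-matching criterion is the kernel Maximum Mean Discrepancy (MMD), which compares two samples via mean kernel similarities (all function names below are illustrative, not from the paper):

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Pairwise squared Euclidean distances between rows of X and Y,
    # mapped through an RBF (Gaussian) kernel.
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd2(X, Y, sigma=1.0):
    # Biased estimator of the squared MMD between samples X and Y:
    # E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)].
    # Near zero when the two samples come from the same distribution.
    kxx = gaussian_kernel(X, X, sigma).mean()
    kyy = gaussian_kernel(Y, Y, sigma).mean()
    kxy = gaussian_kernel(X, Y, sigma).mean()
    return kxx + kyy - 2.0 * kxy
```

Minimizing such a discrepancy between source and target features is one standard way to learn a domain-invariant space of the kind the abstract describes.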
Pages: 16