Generic network for domain adaptation based on self-supervised learning and deep clustering

Cited by: 17
Authors
Baffour, Adu Asare [1 ]
Qin, Zhen [1 ,2 ]
Geng, Ji [1 ,2 ]
Ding, Yi [1 ,2 ]
Deng, Fuhu [1 ,2 ]
Qin, Zhiguang [1 ,2 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Informat & Software Engn, Chengdu 610054, Peoples R China
[2] Network & Data Secur Key Lab Sichuan Prov, Chengdu 610054, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Domain adaptation; Self-supervised learning; Deep clustering; Image recognition; Pretext task;
DOI
10.1016/j.neucom.2021.12.099
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Domain adaptation methods train a model to find similar feature representations between a source and a target domain. Recent methods leverage self-supervised learning to discover analogous representations across the two domains. However, prior self-supervised methods have three significant drawbacks: (1) leveraging pretext tasks that are susceptible to learning low-level representations, (2) aligning the two domains with an adversarial loss without considering whether the extracted features are low-level representations, and (3) building models that are not flexible enough to accommodate varying proportions of target labels, i.e., they assume target labels are always available. This paper presents a Generic Domain Adaptation Network (GDAN) to address these issues. First, we introduce a criterion based on instance discrimination to select appropriate pretext tasks for learning high-level domain-invariant representations. Then, we propose a semantic neighbor cluster to align the features of the two domains. The semantic neighbor cluster applies a clustering technique in a feature embedding space to form clusters according to high-level semantic similarities. Finally, we present a weighted target loss function to balance the model weights according to the available target labels. This loss function makes GDAN flexible for semi-supervised scenarios, i.e., partly labeled target data. We evaluate the proposed methods on four domain adaptation benchmark datasets. The experimental results show that the proposed methods align the two domains well and achieve competitive results. (c) 2021 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
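The weighted target loss described above can be illustrated with a minimal sketch. This is not the authors' implementation; the blending rule (scaling a supervised term by the labeled fraction and an entropy penalty by the unlabeled fraction) and all function names here are assumptions chosen to show how a single objective can cover unsupervised (label fraction 0) through fully supervised target settings.

```python
import math

def cross_entropy(probs, label):
    """Negative log-likelihood of the true class (supervised term)."""
    return -math.log(probs[label])

def entropy(probs):
    """Shannon entropy, a common unsupervised confidence penalty."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def weighted_target_loss(preds, labels, label_fraction):
    """Blend a supervised loss on labeled target samples with an
    entropy penalty on unlabeled ones, weighted by the proportion of
    labeled target data (a hypothetical weighting scheme).

    preds: list of class-probability vectors
    labels: list of int class indices, or None for unlabeled samples
    label_fraction: float in [0, 1], share of target samples with labels
    """
    sup_sum = unsup_sum = 0.0
    n_sup = n_unsup = 0
    for p, y in zip(preds, labels):
        if y is None:
            unsup_sum += entropy(p)
            n_unsup += 1
        else:
            sup_sum += cross_entropy(p, y)
            n_sup += 1
    sup = sup_sum / n_sup if n_sup else 0.0
    unsup = unsup_sum / n_unsup if n_unsup else 0.0
    # With no target labels the objective reduces to the unsupervised
    # term; with fully labeled targets it reduces to the supervised one.
    return label_fraction * sup + (1.0 - label_fraction) * unsup
```

For example, with `label_fraction = 0.0` the loss ignores the supervised term entirely, recovering a purely unsupervised target objective.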
Pages: 126-136
Page count: 11