Information Maximizing Adaptation Network With Label Distribution Priors for Unsupervised Domain Adaptation

Cited by: 11
Authors
Wang, Pei [1 ]
Yang, Yun [2 ]
Xia, Yuelong [1 ]
Wang, Kun [1 ]
Zhang, Xingyi [3 ]
Wang, Song [4 ]
Affiliations
[1] Yunnan Univ, Sch Informat Sci & Engn, Kunming 650504, Yunnan, Peoples R China
[2] Yunnan Univ, Natl Pilot Sch Software, Kunming 650091, Yunnan, Peoples R China
[3] Anhui Univ, Sch Comp Sci & Technol, Key Lab Intelligent Comp & Signal Proc, Minist Educ, Hefei 230039, Peoples R China
[4] Univ South Carolina, Coll Engn & Comp, Columbia, SC 29208 USA
Keywords
Information theory; label distribution priors; mutual information; unsupervised domain adaptation
DOI
10.1109/TMM.2022.3203574
Chinese Library Classification: TP [Automation Technology, Computer Technology]
Discipline Code: 0812
Abstract
Unsupervised domain adaptation, which transfers knowledge from the source domain to the target domain, remains a challenging problem. Previous domain adaptation methods typically minimize the domain discrepancy by relying on pseudo target labels; since these pseudo labels can be noisy, they may cause misalignment and unsatisfactory adaptation performance. To address these challenges, we propose an information maximization adaptation network with label distribution priors. We revisit feature alignment in unsupervised domain adaptation from the perspective of distribution alignment, and find that learning discriminative feature representations requires minimizing the distribution discrepancy while maximizing the source mutual information between the classifier outputs and the feature representations. Due to domain shift, directly maximizing target mutual information may align features to incorrect classes. We therefore propose a weighted target mutual information that re-weights the estimated mutual information by the mean prediction confidence of each mini-batch, which eliminates the negative impact of inaccurate estimation. In addition, we introduce a label prior distribution regularization term to encourage the predicted label distribution to be similar to the real label distribution. Extensive experimental results on three benchmark datasets show that our proposed method achieves remarkable results compared with previous methods.
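The abstract describes two batch-level terms: a target mutual information estimate re-weighted by the mean prediction confidence of the mini-batch, and a regularizer pulling the batch-level predicted label distribution toward a prior. Below is a minimal PyTorch sketch of how such terms are commonly written; the exact formulation, weighting scheme, and prior used in the paper are not specified here, and all names and values (weighted_target_mutual_info, label_prior_regularizer, the 0.1 trade-off) are illustrative assumptions rather than the authors' implementation.

import torch
import torch.nn.functional as F


def weighted_target_mutual_info(logits: torch.Tensor) -> torch.Tensor:
    # Mutual information I(y; x) estimated on a mini-batch of target logits,
    # re-weighted by the mean prediction confidence of the batch (assumption).
    probs = F.softmax(logits, dim=1)                  # per-sample class posteriors
    mean_probs = probs.mean(dim=0)                    # marginal label distribution in the batch
    marginal_entropy = -(mean_probs * torch.log(mean_probs + 1e-8)).sum()
    cond_entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1).mean()
    mutual_info = marginal_entropy - cond_entropy     # I(y; x) = H(E[p]) - E[H(p)]
    confidence = probs.max(dim=1).values.mean()       # mean max-probability over the batch
    return confidence * mutual_info                   # down-weight unreliable batches


def label_prior_regularizer(logits: torch.Tensor, prior: torch.Tensor) -> torch.Tensor:
    # KL divergence between a prior label distribution (e.g., uniform or
    # estimated from the source domain) and the batch-level predicted marginal.
    mean_probs = F.softmax(logits, dim=1).mean(dim=0)
    return F.kl_div(mean_probs.log(), prior, reduction="sum")


# Usage sketch: combine with a source cross-entropy and a discrepancy loss (not shown).
target_logits = torch.randn(32, 10)                   # dummy mini-batch of target outputs
prior = torch.full((10,), 0.1)                        # uniform prior over 10 classes
loss = -weighted_target_mutual_info(target_logits) + 0.1 * label_prior_regularizer(target_logits, prior)

The confidence re-weighting simply scales the batch estimate by the mean maximum softmax probability, so mini-batches with uncertain predictions contribute less to the adaptation objective.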
Pages: 6026-6039
Number of pages: 14