Automatic Loss Function Search for Adversarial Unsupervised Domain Adaptation

Cited by: 23
Authors
Mei, Zhen [1 ]
Ye, Peng [1 ]
Ye, Hancheng [1 ]
Li, Baopu [2 ]
Guo, Jinyang [3 ]
Chen, Tao [1 ]
Ouyang, Wanli [4 ]
Affiliations
[1] Fudan Univ, Sch Informat Sci & Technol, Shanghai 200433, Peoples R China
[2] Oracle, Redwood City, CA 94065 USA
[3] Beihang Univ, Inst Artificial Intelligence, State Key Lab Software Dev Environm, Beijing 100191, Peoples R China
[4] Univ Sydney, Sch Elect & Informat Engn, Sydney, NSW 2006, Australia
Funding
National Natural Science Foundation of China;
Keywords
Training; Search problems; Feature extraction; Task analysis; Optimization; Entropy; Semantics; AutoML; unsupervised domain adaptation; loss function search;
DOI
10.1109/TCSVT.2023.3260246
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Codes
0808; 0809;
Abstract
Unsupervised domain adaptation (UDA) aims to reduce the domain gap between a labeled source domain and an unlabeled target domain. Many prior works exploit adversarial learning, leveraging pre-designed discriminators to drive the network to align distributions between domains. However, most of them do not consider the degeneration of the domain discriminator caused by the gradually dominating gradients of already-aligned target samples during training, and they still suffer from the cross-domain semantic mismatch problem in the learned feature space. Hence, this paper attempts to understand and solve both issues through the lens of the optimization loss and proposes an automatic loss function search for adversarial domain adaptation (ALSDA). First, we extend the common adversarial loss with an adjustable hyper-parameter that re-weights the gradients assigned to target samples, so that the domain discriminator can exert a sustained and influential driving force for domain alignment. Meanwhile, we upgrade the traditional orthogonality loss with class-wise adjustable hyper-parameters that strengthen cross-domain feature separation. Since manually determining the optimal loss functions requires expensive expert effort, we leverage the popular AutoML paradigm to automatically search for the optimal loss functions from a novel pre-defined search space for UDA. Further, to enable loss function search when the target domain is unlabeled, we introduce a simple but effective entropy-guided search strategy based on REINFORCE. Extensive experiments on various typical baselines and benchmark datasets such as Office-Home, Office-31, and Birds-31 validate the generalization ability and superiority of the proposed ALSDA.
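The abstract's first idea, an adversarial loss whose adjustable hyper-parameter re-weights the gradients from target samples, can be sketched roughly as below. This is an illustrative interpretation rather than the paper's exact formulation: it assumes a focal-style modulating factor `(1 - d)**gamma`, with the hypothetical hyper-parameter `gamma`, so that target samples the discriminator already maps toward the source distribution contribute smaller gradients and the discriminator is not dominated by well-aligned samples.

```python
import numpy as np

def reweighted_adv_loss(d_t, gamma=1.0):
    """Hedged sketch of a re-weighted target adversarial loss.

    d_t   : discriminator's probability that each target sample is 'source';
            the standard adversarial target term is -log(d_t).
    gamma : assumed adjustable hyper-parameter; the modulating factor
            (1 - d_t)**gamma shrinks the contribution of target samples
            that already look source-like (d_t close to 1), so that
            less-aligned samples keep driving domain alignment.
    """
    d_t = np.clip(np.asarray(d_t, dtype=float), 1e-7, 1 - 1e-7)
    weights = (1.0 - d_t) ** gamma
    return float(np.mean(-weights * np.log(d_t)))
```

With `gamma=0` the weights are all 1 and the sketch reduces to the standard adversarial term; larger `gamma` suppresses already-aligned samples more aggressively.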
Pages: 5868-5881
Page count: 14