Equity in Unsupervised Domain Adaptation by Nuclear Norm Maximization

Cited by: 12
Authors
Wang, Mengzhu [1 ]
Wang, Shanshan [2 ,3 ]
Yang, Xun [4 ]
Yuan, Jianlong [5 ]
Zhang, Wenju [6 ]
Affiliations
[1] Hebei Univ Technol, Sch Artificial Intelligence, Tianjin 300401, Peoples R China
[2] Anhui Univ, Informat Mat & Intelligent Sensing Lab Anhui Prov, Hefei 230601, Peoples R China
[3] Anhui Univ, Inst Phys Sci & Informat Technol, Hefei 230601, Peoples R China
[4] Univ Sci & Technol China, Sch Informat Sci & Technol, Hefei 230026, Peoples R China
[5] Alibaba Grp, Beijing 100102, Peoples R China
[6] Natl Univ Def Technol, Changsha 410073, Hunan, Peoples R China
Keywords
Transfer learning; domain adaptation; image classification; nuclear norm; kernel
DOI
10.1109/TCSVT.2023.3346444
CLC Classification
TM (Electrical Technology); TN (Electronic Technology, Communication Technology)
Discipline Codes
0808; 0809
Abstract
Nuclear norm maximization has been shown, empirically, to enhance the transferability of unsupervised domain adaptation (UDA) models. In this paper, we identify a new property termed equity, which measures the balance of the predicted class distribution, to theoretically demystify the efficacy of nuclear norm maximization for UDA. With this in mind, we offer a new discriminability-and-equity maximization paradigm built on squares loss, such that predictions are equalized explicitly. To verify its feasibility and flexibility, two new losses, Class Weighted Squares Maximization (CWSM) and Normalized Squares Maximization (NSM), are proposed to maximize both predictive discriminability and equity, at the class level and the sample level, respectively. Importantly, we theoretically relate these two novel losses (i.e., CWSM and NSM) to equity maximization under mild conditions, and empirically demonstrate the importance of predictive equity in UDA. Moreover, the equity constraints in both losses are efficient to compute. Experiments on cross-domain image classification over three popular benchmark datasets show that both CWSM and NSM outperform their corresponding counterparts.
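To make the baseline technique the abstract builds on concrete, the following is a minimal NumPy sketch of batch nuclear-norm maximization (in the spirit of Cui et al., CVPR 2020): the loss is the negative nuclear norm of the batch prediction matrix, so minimizing it pushes predictions toward being both confident (discriminable) and spread across classes (equitable). This is an illustrative sketch, not the paper's CWSM or NSM losses; the function names are hypothetical.

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax over a (batch, classes) logit matrix."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def bnm_loss(logits):
    """Negative batch nuclear norm of the softmax prediction matrix.

    The nuclear norm is the sum of singular values of the
    (batch, classes) prediction matrix P; maximizing it favors
    confident predictions balanced across classes.
    """
    P = softmax(logits)                     # prediction matrix
    s = np.linalg.svd(P, compute_uv=False)  # singular values of P
    return -s.sum() / P.shape[0]            # negate: maximize by minimizing
```

As a sanity check of the "equity" intuition: for a batch of 4 samples and 4 classes, confident predictions spread over all classes (one per class) give nuclear norm 4 (loss -1.0), whereas collapsing every sample onto class 0 gives nuclear norm 2 (loss -0.5), so the balanced batch achieves the lower loss.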
Pages: 5533-5545 (13 pages)