Dynamic Label Smoothing and Semantic Transport for Unsupervised Domain Adaptation on Object Recognition

Cited by: 7
Authors
Ding, Feifei [1 ]
Li, Jianjun [1 ]
Tian, Wanyong [2 ]
Zhang, Shanqing [1 ]
Yuan, Wenqiang [1 ]
Affiliations
[1] Hangzhou Dianzi Univ, Sch Comp Sci & Engn, Hangzhou 310018, Peoples R China
[2] China Elect Technol Grp Corp CETC, Key Lab Data Link Technol, Xian 710071, Peoples R China
Keywords
Unsupervised domain adaptation; label smoothing; memory bank; semantic alignment
DOI
10.1109/TCE.2023.3293841
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology]
Subject Classification Codes
0808; 0809
Abstract
Domain adaptation has emerged as a valuable approach for reducing the cost of data annotation in object recognition. Despite its usefulness, domain adaptation is often impeded by domain shift, which can lead to suboptimal performance. To address this challenge, previous works have attempted to align the global distribution across the two domains. However, this approach may not adequately handle misalignment near classifier boundaries, which can cause a bias towards the source domain. In this paper, we introduce a novel label-smoothing strategy and a semantic transport optimization method for unsupervised domain adaptation. Our approach leverages a memory bank to dynamically learn the smoothing rate, and formulates semantic alignment as an optimal transport problem. We also integrate class proportions into the optimization to enhance the discriminative ability of target features. To further improve performance, we incorporate these two strategies into adversarial-based adaptation methods. We conduct comprehensive experiments on three common benchmarks to evaluate our method. The results show that our approach achieves competitive performance compared to existing methods. Our code is publicly available at https://github.com/feifei-cv/DLST.
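To make the first idea concrete, below is a minimal PyTorch sketch of class-wise dynamic label smoothing driven by a memory bank of prediction confidences. It is not the authors' DLST implementation (see the linked repository for that); the smoothing rule, hyperparameters, and class count used here are assumptions for illustration, and the optimal-transport semantic alignment step is omitted.

```python
# Illustrative sketch (assumed formulation, not the official DLST code): a memory
# bank tracks per-class prediction confidence, and classes the model is less
# confident about receive a larger label-smoothing rate.
import torch
import torch.nn.functional as F


class DynamicLabelSmoothing:
    def __init__(self, num_classes: int, momentum: float = 0.9, max_rate: float = 0.2):
        self.num_classes = num_classes
        self.momentum = momentum      # EMA factor for the memory bank
        self.max_rate = max_rate      # upper bound on the smoothing rate
        # Memory bank: running mean of the true-class softmax confidence per class.
        self.bank = torch.full((num_classes,), 0.5)

    @torch.no_grad()
    def update(self, logits: torch.Tensor, labels: torch.Tensor) -> None:
        """Update the memory bank with the confidence assigned to the true class."""
        conf = F.softmax(logits, dim=1).gather(1, labels.unsqueeze(1)).squeeze(1)
        for c in labels.unique():
            mean_conf = conf[labels == c].mean()
            self.bank[c] = self.momentum * self.bank[c] + (1 - self.momentum) * mean_conf

    def loss(self, logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        """Cross-entropy with a class-wise smoothing rate: lower confidence -> more smoothing."""
        rate = (self.max_rate * (1.0 - self.bank[labels])).to(logits.device)        # (B,)
        target = F.one_hot(labels, self.num_classes).float().to(logits.device)      # (B, C)
        target = target * (1.0 - rate).unsqueeze(1) + rate.unsqueeze(1) / self.num_classes
        return -(target * F.log_softmax(logits, dim=1)).sum(dim=1).mean()


if __name__ == "__main__":
    dls = DynamicLabelSmoothing(num_classes=31)   # e.g., 31 classes as in Office-31
    logits = torch.randn(8, 31)                   # stand-in for a source-domain batch
    labels = torch.randint(0, 31, (8,))
    dls.update(logits, labels)                    # refresh the memory bank
    print(dls.loss(logits, labels).item())        # smoothed cross-entropy loss
```

In this sketch, confidently predicted classes are smoothed less (keeping sharp supervision), while uncertain classes are smoothed more (softening potentially noisy targets); the actual smoothing schedule in the paper may differ.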
Pages: 1133-1140
Page count: 8