Similar norm more transferable: Rethinking feature norms discrepancy in adversarial domain adaptation

Cited by: 9
Authors
Dan, Jun [1 ]
Liu, Mushui [1 ]
Xie, Chunfeng [2 ]
Yu, Jiawang [1 ]
Xie, Haoran [3 ]
Li, Ruokun [4 ,5 ]
Dong, Shunjie [4 ,5 ]
Affiliations
[1] Zhejiang Univ, Coll Informat Sci & Elect Engn, Hangzhou 310027, Peoples R China
[2] Queen Mary Univ London, Sch Elect Engn & Comp Sci, London E1 4NS, England
[3] Tsinghua Univ, Dept Elect Engn, Beijing 100084, Peoples R China
[4] Shanghai Jiao Tong Univ, Ruijin Hosp, Dept Radiol, Sch Med, Shanghai 200025, Peoples R China
[5] Shanghai Jiao Tong Univ, Coll Hlth Sci & Technol, Sch Med, Shanghai 200025, Peoples R China
Keywords
Transfer learning; Domain adaptation; Adversarial training; Discriminative feature; Feature norms;
DOI
10.1016/j.knosys.2024.111908
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Adversarial learning has become an effective paradigm for learning transferable features in domain adaptation. However, many previous adversarial domain adaptation methods inevitably damage the discriminative information contained in transferable features, which limits the potential of adversarial learning. In this paper, we explore the reason for this phenomenon and find that, during adversarial adaptation, the model pays more attention to aligning feature norms than to learning domain-invariant features. Moreover, we observe that the feature norms carry crucial category information that previous studies have ignored. To achieve better adversarial adaptation, we propose two novel feature norm alignment strategies: Histogram-guided Norms Alignment (HNA) and Transport-guided Norms Alignment (TNA). Both strategies model the feature norms from a distribution perspective, which not only facilitates reducing the norm discrepancy but also makes full use of the discriminative information contained in the norms. Extensive experiments demonstrate that progressively aligning the feature norm distributions of the two domains effectively promotes the capture of semantically rich shared features and significantly boosts the model's transfer performance. We hope our findings can shed some light on future research on adversarial domain adaptation.
Pages: 10
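The abstract describes aligning the feature-norm distributions of the two domains rather than only their average norms. As a rough illustration of the transport-guided idea, the sketch below matches sorted per-sample L2 norms of equal-sized source and target batches, which coincides with 1-D optimal transport in that setting; the function name, the trade-off weight `lambda_norm`, and the reduction to quantile matching are illustrative assumptions, not the paper's exact HNA or TNA objectives (a histogram-guided variant would additionally need differentiable soft binning).

```python
import torch


def norm_alignment_loss(src_feats: torch.Tensor, tgt_feats: torch.Tensor) -> torch.Tensor:
    """Align the distributions of per-sample L2 feature norms across domains.

    Illustrative sketch only: sorted-quantile matching, equivalent to 1-D
    optimal transport for equal-sized batches; not the paper's exact
    HNA/TNA formulation.
    """
    # Per-sample L2 norms of the feature vectors, shape (batch_size,)
    src_norms = src_feats.norm(p=2, dim=1)
    tgt_norms = tgt_feats.norm(p=2, dim=1)

    # For two equal-sized 1-D empirical distributions, optimal transport
    # pairs sorted values, so the squared Wasserstein-2 distance reduces to
    # a mean squared difference between sorted norms.
    src_sorted, _ = torch.sort(src_norms)
    tgt_sorted, _ = torch.sort(tgt_norms)
    return ((src_sorted - tgt_sorted) ** 2).mean()


# Usage sketch: add the norm-alignment term to an adversarial adaptation loss.
# `lambda_norm` is a hypothetical trade-off weight, not a value from the paper.
# total_loss = cls_loss + adv_loss + lambda_norm * norm_alignment_loss(f_s, f_t)
```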