Similar norm more transferable: Rethinking feature norms discrepancy in adversarial domain adaptation

Cited by: 9
Authors
Dan, Jun [1 ]
Liu, Mushui [1 ]
Xie, Chunfeng [2 ]
Yu, Jiawang [1 ]
Xie, Haoran [3 ]
Li, Ruokun [4 ,5 ]
Dong, Shunjie [4 ,5 ]
Affiliations
[1] Zhejiang Univ, Coll Informat Sci & Elect Engn, Hangzhou 310027, Peoples R China
[2] Queen Mary Univ London, Sch Elect Engn & Comp Sci, London E1 4NS, England
[3] Tsinghua Univ, Dept Elect Engn, Beijing 100084, Peoples R China
[4] Shanghai Jiao Tong Univ, Ruijin Hosp, Dept Radiol, Sch Med, Shanghai 200025, Peoples R China
[5] Shanghai Jiao Tong Univ, Coll Hlth Sci & Technol, Sch Med, Shanghai 200025, Peoples R China
Keywords
Transfer learning; Domain adaptation; Adversarial training; Discriminative feature; Feature norms;
DOI
10.1016/j.knosys.2024.111908
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Adversarial learning has become an effective paradigm for learning transferable features in domain adaptation. However, many previous adversarial domain adaptation methods inevitably damage the discriminative information contained in transferable features, which limits the potential of adversarial learning. In this paper, we explore the reason for this phenomenon and find that, during adversarial adaptation, the model pays more attention to aligning feature norms than to learning domain-invariant features. Moreover, we observe that the feature norms carry crucial category information, which has been ignored in previous studies. To achieve better adversarial adaptation, we propose two novel feature norms alignment strategies: Histogram-guided Norms Alignment (HNA) and Transport-guided Norms Alignment (TNA). Both strategies model the feature norms from a distribution perspective, which not only facilitates reducing the norms discrepancy but also makes full use of the discriminative information contained in the norms. Extensive experiments demonstrate that progressively aligning the feature norms distributions of the two domains effectively promotes the capture of semantically rich shared features and significantly boosts the model's transfer performance. We hope our findings can shed light on future research on adversarial domain adaptation.
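The two strategies above treat per-sample feature norms as a distribution to be aligned across domains. The following NumPy sketch illustrates the general idea with a histogram-based discrepancy and a 1-D optimal-transport distance between norm distributions; it is a minimal illustration, not the paper's implementation, and the function names, binning choices, and equal-batch-size assumption are ours.

```python
import numpy as np

def feature_norms(features):
    # L2 norm of each feature vector (one row per sample)
    return np.linalg.norm(features, axis=1)

def histogram_norm_gap(source_feats, target_feats, bins=10):
    # Histogram-based discrepancy between the two domains' feature-norm
    # distributions (illustrative stand-in for a histogram-guided objective).
    s, t = feature_norms(source_feats), feature_norms(target_feats)
    lo, hi = min(s.min(), t.min()), max(s.max(), t.max())
    hs, _ = np.histogram(s, bins=bins, range=(lo, hi), density=True)
    ht, _ = np.histogram(t, bins=bins, range=(lo, hi), density=True)
    return np.abs(hs - ht).sum()  # L1 distance between density histograms

def transport_norm_gap(source_feats, target_feats):
    # 1-D Wasserstein distance between norm distributions, computed by
    # matching sorted samples (illustrative stand-in for a
    # transport-guided objective; assumes equal batch sizes).
    s = np.sort(feature_norms(source_feats))
    t = np.sort(feature_norms(target_feats))
    n = min(len(s), len(t))
    return np.abs(s[:n] - t[:n]).mean()
```

In an adversarial adaptation loop, a differentiable version of such a distance would be added to the training objective so that reducing the norms discrepancy and learning domain-invariant features proceed together.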
Pages: 10