Beyond Sharing Weights for Deep Domain Adaptation

Citations: 293
Authors
Rozantsev, Artem [1 ]
Salzmann, Mathieu [1 ]
Fua, Pascal [1 ]
Affiliations
[1] Ecole Polytech Fed Lausanne, Comp Vis Lab, CH-1015 Lausanne, Switzerland
Keywords
Domain adaptation; deep learning; RECOGNITION; FEATURES;
DOI
10.1109/TPAMI.2018.2814042
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
The performance of a classifier trained on data coming from a specific domain typically degrades when applied to a related but different one. While annotating many samples from the new domain would address this issue, it is often too expensive or impractical. Domain Adaptation has therefore emerged as a solution to this problem: it leverages annotated data from a source domain, in which it is abundant, to train a classifier to operate in a target domain, in which it is either sparse or even lacking altogether. In this context, the recent trend consists of learning deep architectures whose weights are shared for both domains, which essentially amounts to learning domain-invariant features. Here, we show that it is more effective to explicitly model the shift from one domain to the other. To this end, we introduce a two-stream architecture, where one stream operates in the source domain and the other in the target domain. In contrast to other approaches, the weights in corresponding layers are related but not shared. We demonstrate that this both yields higher accuracy than state-of-the-art methods on several object recognition and detection tasks and consistently outperforms networks with shared weights in both supervised and unsupervised settings.
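The abstract above sketches the key idea: two parallel streams whose corresponding weights are related rather than shared. Below is a minimal, hypothetical sketch of that coupling in PyTorch; the layer layout, the plain L2 weight-link penalty, and the names lambda_w / lambda_d are illustrative assumptions rather than the authors' implementation (the paper's regularizer allows a more general, learnable relation between corresponding weights and adds a domain-discrepancy term such as MMD for the unsupervised setting).

```python
import torch
import torch.nn as nn

def make_stream():
    # One stream; both domains use the same layout (illustrative, for 3x32x32 inputs).
    return nn.Sequential(
        nn.Conv2d(3, 32, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),   # -> 32x14x14
        nn.Conv2d(32, 64, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),  # -> 64x5x5
        nn.Flatten(),
        nn.Linear(64 * 5 * 5, 128), nn.ReLU(),
        nn.Linear(128, 10),
    )

source_net = make_stream()                            # trained on labeled source data
target_net = make_stream()                            # processes target-domain data
target_net.load_state_dict(source_net.state_dict())   # start both streams from the same weights

def weight_link_penalty(net_a, net_b):
    # Soft tie between corresponding parameters: an L2 penalty instead of hard sharing.
    # (The paper allows a learnable linear relation between the two weight sets;
    # a plain squared distance is the simplest stand-in.)
    return sum(((pa - pb) ** 2).sum()
               for pa, pb in zip(net_a.parameters(), net_b.parameters()))

# One illustrative training objective (hypothetical weighting factors lambda_w, lambda_d):
#   loss = cross_entropy(source_net(x_src), y_src)
#          + lambda_w * weight_link_penalty(source_net, target_net)
#          + lambda_d * domain_discrepancy(feat_src, feat_tgt)   # e.g., an MMD term
```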
Pages: 801-814
Page count: 14
Related Papers
50 records in total
  • [31] An unsupervised deep domain adaptation approach for robust speech recognition
    Sun, Sining
    Zhang, Binbin
    Xie, Lei
    Zhang, Yanning
    NEUROCOMPUTING, 2017, 257 : 79 - 87
  • [32] A survey of deep domain adaptation based on label set classification
    Fan, Min
    Cai, Ziyun
    Zhang, Tengfei
    Wang, Baoyun
    MULTIMEDIA TOOLS AND APPLICATIONS, 2022, 81 (27) : 39545 - 39576
  • [33] Domain Adaptation Deep Attention Network for Automatic Logo Detection and Recognition in Google Street View
    Yohannes, Ervin
    Lin, Chih-Yang
    Shih, Timothy K.
    Hong, Chen-Ya
    Enkhbat, Avirmed
    Utaminingrum, Fitri
    IEEE ACCESS, 2021, 9 : 102623 - 102635
  • [34] Coupled Real-Synthetic Domain Adaptation for Real-World Deep Depth Enhancement
    Gu, Xiao
    Guo, Yao
    Deligianni, Fani
    Yang, Guang-Zhong
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2020, 29 : 6343 - 6356
  • [35] CD-VulD: Cross-Domain Vulnerability Discovery Based on Deep Domain Adaptation
    Liu, Shigang
    Lin, Guanjun
    Qu, Lizhen
    Zhang, Jun
    De Vel, Olivier
    Montague, Paul
    Xiang, Yang
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2022, 19 (01) : 438 - 451
  • [36] Deep learning with domain adaptation for accelerated projection-reconstruction MR
    Han, Yoseob
    Yoo, Jaejun
    Kim, Hak Hee
    Shin, Hee Jung
    Sung, Kyunghyun
    Ye, Jong Chul
    MAGNETIC RESONANCE IN MEDICINE, 2018, 80 (03) : 1189 - 1205
  • [37] Deep ladder reconstruction-classification network for unsupervised domain adaptation
    Deng, Wanxia
    Su, Zhuo
    Qiu, Qiang
    Zhao, Lingjun
    Kuang, Gangyao
    Pietikainen, Matti
    Xiao, Huaxin
    Liu, Li
    PATTERN RECOGNITION LETTERS, 2021, 152 : 398 - 405
  • [38] Deep domain adaptation for anti-spoofing in speaker verification systems
    Himawan, Ivan
    Villavicencio, Fernando
    Sridharan, Sridha
    Fookes, Clinton
    COMPUTER SPEECH AND LANGUAGE, 2019, 58 : 377 - 402
  • [39] Conditional Adaptation Deep Networks for Unsupervised Cross Domain Image Classification
    Chen, Yu
    Yang, ChunLing
    Zhang, Yan
    Li, YuZe
    PROCEEDINGS OF THE 2019 14TH IEEE CONFERENCE ON INDUSTRIAL ELECTRONICS AND APPLICATIONS (ICIEA 2019), 2019, : 517 - 521
  • [40] Joint Learning of Multiple Latent Domains and Deep Representations for Domain Adaptation
    Wu, Xinxiao
    Chen, Jin
    Yu, Feiwu
    Yao, Mingyu
    Luo, Jiebo
    IEEE TRANSACTIONS ON CYBERNETICS, 2021, 51 (05) : 2676 - 2687