Adversarial open set domain adaptation via progressive selection of transferable target samples

Cited by: 14
Authors
Gao, Yuan [1 ]
Ma, Andy J. [1 ,3 ]
Gao, Yue [1 ]
Wang, Jinpeng [2 ]
Pan, YoungSun [1 ]
Affiliations
[1] Sun Yat Sen Univ, Sch Data & Comp Sci, Guangzhou, Peoples R China
[2] Sun Yat Sen Univ, Sch Elect & Informat Technol, Guangzhou, Peoples R China
[3] Minist Educ, Key Lab Machine Intelligence & Adv Comp, Guangzhou, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Open set domain adaptation; Adversarial learning; Progressive selection;
DOI
10.1016/j.neucom.2020.05.032
Chinese Library Classification
TP18 [Theory of artificial intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In recent years, many Unsupervised Domain Adaptation (UDA) methods have been proposed to tackle the domain shift problem. Most existing UDA methods are derived for Closed Set Domain Adaptation (CSDA), in which the source and target domains are assumed to share the same label space. In practice, however, the target domain may contain an unknown class absent from the source domain, i.e., Open Set Domain Adaptation (OSDA). Due to the presence of the unknown class, aligning the whole distributions of the source and target domains for OSDA, as in previous methods, leads to negative transfer. Existing methods developed for OSDA attempt to assign smaller weights to target samples of the unknown class. Despite their promising performance, samples of the unknown class are still used for distribution alignment, so the model remains exposed to the risk of negative transfer. Instead of reweighting, this paper presents a novel method, namely the Thresholded Domain Adversarial Network (ThDAN), which progressively selects transferable target samples for distribution alignment. Based on the fact that target samples from the known classes must be more transferable than those of the unknown one, we derive a criterion that quantifies transferability by constructing classifiers to categorize the known classes and to discriminate the unknown class. In ThDAN, an adaptive threshold is calculated by averaging the transferability scores of source-domain samples to select target samples for training. The threshold is adjusted progressively during training so that more and more target samples from the known classes can be correctly selected for adversarial training. Extensive experiments show that the proposed method outperforms state-of-the-art domain adaptation and open set recognition approaches on benchmarks. (C) 2020 Elsevier B.V. All rights reserved.
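The selection mechanism summarized in the abstract (an adaptive threshold set to the average transferability score of source samples, progressively loosened so more known-class target samples are admitted over training) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the transferability scores are taken as given, and the linear relaxation schedule with its `alpha` parameter is an assumption.

```python
from statistics import mean


def select_transferable_targets(source_scores, target_scores,
                                epoch, max_epoch, alpha=0.5):
    """Return indices of target samples passing the adaptive threshold.

    The threshold is the mean transferability of source samples,
    relaxed linearly over training (hypothetical schedule) so that
    progressively more target samples are selected for alignment.
    """
    # Adaptive threshold: average transferability of source-domain samples.
    threshold = mean(source_scores) * (1.0 - alpha * epoch / max_epoch)
    # Keep only target samples that look at least as transferable.
    selected = [i for i, s in enumerate(target_scores) if s >= threshold]
    return selected, threshold
```

Early in training only the most confidently known-class target samples exceed the threshold; as the threshold is relaxed, the selected set grows, mirroring the progressive selection described above.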
Pages: 174-184
Page count: 11
Cited references
47 records in total
  • [21] Jain L. P., 2014, Lecture Notes in Computer Science, 8691: 393. DOI 10.1007/978-3-319-10578-9_26
  • [22] Krizhevsky A., Sutskever I., Hinton G. E. ImageNet Classification with Deep Convolutional Neural Networks. Communications of the ACM, 2017, 60(6): 84-90
  • [23] Li M., 2018, 2018 4th International Conference on Education, Management and Information Technology (ICEMIT 2018): 1184
  • [24] Liu H., Cao Z., Long M., Wang J., Yang Q. Separate to Adapt: Open Set Domain Adaptation via Progressive Separation. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019), 2019: 2922-2931
  • [25] Liu J., Li J., Lu K. Coupled local-global adaptation for multi-source transfer learning. Neurocomputing, 2018, 275: 247-254
  • [26] Long M., 2016, Advances in Neural Information Processing Systems: 136
  • [27] Long M., 2017, Proceedings of Machine Learning Research: 2208
  • [28] Long M., 2015, Proceedings of Machine Learning Research, 37: 97
  • [29] Mao X., Wang S., Zheng L., Huang Q. Semantic invariant cross-domain image generation with generative adversarial networks. Neurocomputing, 2018, 293: 55-63
  • [30] Mendes Junior P. R., de Souza R. M., Werneck R. de O., Stein B. V., Pazinato D. V., de Almeida W. R., Penatti O. A. B., Torres R. da S., Rocha A. Nearest neighbors distance ratio open-set classifier. Machine Learning, 2017, 106(3): 359-386