Improving Feature's Capability of Carrying Category-specific Information for Adversarial Domain Adaptation

Cited by: 0
Authors
Li, Yundong [1 ]
Lin, Chen [1 ]
Hu, Wei [1 ]
Dong, Han [1 ]
Affiliations
[1] North China Univ Technol, Sch Informat Sci & Technol, Beijing, Peoples R China
Source
2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN) | 2020
Funding
Beijing Natural Science Foundation;
Keywords
deep learning; domain adaptation; generative adversarial network; transfer learning;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Recent research has shown that generative adversarial networks (GANs) can be successfully applied to align features for domain adaptation. However, the extracted features may lose category-specific information, because adversarial training only distinguishes them as belonging to either the source or the target domain. To address this issue, a two-stage training framework consisting of two sets of GANs and a dedicated classifier is proposed in this study. In the pretraining stage, we use an encoder-decoder-classifier structure to obtain discriminative and representative features of the source domain, which serve as a reference in the subsequent training. In the adversarial training stage, two sets of GANs are used to align target-domain features with those of the source domain and, simultaneously, to transfer target-domain samples to the source domain. A dedicated classifier is trained jointly with the adversarial loss to force the generated target-domain features to carry category-specific information, which significantly improves classification performance. The source-domain features stay intact during the adversarial training stage; thus, our approach alleviates the training burden of the GANs. The proposed method has been validated on the digit datasets and the Office-31 dataset. Experimental results demonstrate average accuracies of 96.5% and 95.1%, respectively, which is superior or comparable to state-of-the-art results.
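The core adversarial alignment step described in the abstract can be illustrated with a toy sketch: a NumPy example that aligns shifted 2-D target features to a source distribution. It assumes a shift-only generator and a logistic-regression discriminator, both deliberate simplifications standing in for the paper's actual GAN generators and classifier; the source features are held fixed throughout, mirroring the paper's design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D "features": the target domain is the source shifted by (3, 3).
src = rng.normal(0.0, 1.0, size=(200, 2))   # source-domain features (kept intact)
tgt = rng.normal(3.0, 1.0, size=(200, 2))   # target-domain features

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Generator: a learned shift applied to target features
# (a minimal stand-in for a GAN generator).
shift = np.zeros(2)

for outer in range(200):
    gen = tgt + shift                        # "generated" target features
    # Discriminator: logistic regression refit from scratch each round,
    # approximating "train D to optimality" (label 1 = source, 0 = target).
    X = np.vstack([src, gen])
    y = np.concatenate([np.ones(len(src)), np.zeros(len(gen))])
    w, c = np.zeros(2), 0.0
    for _ in range(50):                      # gradient ascent on the log-likelihood
        p = sigmoid(X @ w + c)
        w += 0.1 * X.T @ (y - p) / len(X)
        c += 0.1 * np.mean(y - p)
    # Generator step: ascend mean log D(gen) w.r.t. the shift, i.e. move the
    # generated features toward the region the discriminator calls "source".
    p_gen = sigmoid(gen @ w + c)
    shift += 0.05 * np.mean(1.0 - p_gen) * w

aligned = tgt + shift
gap = np.linalg.norm(aligned.mean(axis=0) - src.mean(axis=0))
print(f"residual mean gap after alignment: {gap:.2f}")
```

Once the domain gap shrinks, the refit discriminator's weights shrink with it, so the generator steps vanish at alignment. The paper's contribution beyond this plain alignment is the dedicated classifier trained alongside the adversarial loss, which keeps the aligned features category-discriminative rather than merely domain-confused.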
Pages: 8