Towards Corruption-Agnostic Robust Domain Adaptation

Times Cited: 6
Authors
Xu, Yifan [1 ,2 ]
Sheng, Kekai [3 ]
Dong, Weiming [1 ,4 ]
Wu, Baoyuan [5 ]
Xu, Changsheng [1 ,2 ]
Hu, Bao-Gang [1 ]
Affiliations
[1] Chinese Acad Sci, NLPR, Inst Automat, 95 East Zhongguancun Rd, Beijing 100190, Peoples R China
[2] Univ Chinese Acad Sci, Sch Artificial Intelligence, 95 East Zhongguancun Rd, Beijing 100190, Peoples R China
[3] Tencent Inc, Youtu Lab, 397 Tianlin Rd, Shanghai 201103, Peoples R China
[4] CASIA LLvis Joint Lab, 95 East Zhongguancun Rd, Beijing 100190, Peoples R China
[5] Chinese Univ Hong Kong, Shenzhen Res Inst Big Data, 2001 Longxiang Rd, Shenzhen 518172, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Domain adaptation; corruption robustness; transfer learning;
DOI
10.1145/3501800
Chinese Library Classification (CLC)
TP [Automation and Computer Technology];
Discipline Code
0812;
Abstract
Great progress has been made in domain adaptation over the past decades. However, existing works rest on an ideal assumption that testing target domains are independent and identically distributed with training target domains. In practice, because of unpredictable corruptions (e.g., noise and blur) in real data, such as web images and real-world object detection, domain adaptation methods are increasingly required to be corruption robust on target domains. We investigate a new task, corruption-agnostic robust domain adaptation (CRDA): to be accurate on original data and robust against corruptions that are unavailable during training on target domains. This task is non-trivial owing to the large domain discrepancy and the unsupervised nature of target domains. We observe that simple combinations of popular methods for domain adaptation and corruption robustness yield suboptimal CRDA results. We therefore propose a new approach based on two technical insights into CRDA: (1) an easy-to-plug module called the domain discrepancy generator (DDG), which generates samples that enlarge the domain discrepancy so as to mimic unpredictable corruptions; and (2) a simple but effective teacher-student scheme with a contrastive loss that enhances the constraints on target domains. Experiments verify that DDG maintains or even improves performance on original data and achieves better corruption robustness than the baselines. Our code is available at: https://github.com/YifanXu74/CRDA.
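The abstract's second insight, a teacher-student scheme constrained by a contrastive loss on target samples, can be sketched with an InfoNCE-style objective: each student embedding is pulled toward the teacher embedding of the same target sample and pushed away from the other samples in the batch. This is a minimal illustration, not the paper's implementation (see the GitHub link for that); all function names and the temperature value here are assumptions.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    # Project embeddings onto the unit sphere so dot products are cosines.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def contrastive_consistency_loss(student_feats, teacher_feats, temperature=0.1):
    """InfoNCE-style loss: for row i, the positive is teacher row i
    (same target sample); all other rows in the batch are negatives."""
    s = l2_normalize(student_feats)
    t = l2_normalize(teacher_feats)
    logits = s @ t.T / temperature                    # (B, B) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))               # positives on the diagonal

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 32))
# Matched student/teacher views give a small loss; unrelated features do not.
aligned = contrastive_consistency_loss(feats, feats)
shuffled = contrastive_consistency_loss(feats, rng.normal(size=(8, 32)))
print(aligned < shuffled)  # → True
```

In the CRDA setting described above, the teacher would typically see the original target image and the student a DDG-perturbed view, so minimizing this loss encourages corruption-invariant target features.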
Pages: 16