Learning to Detect Open Classes for Universal Domain Adaptation

Cited by: 132
Authors
Fu, Bo [1 ,2 ]
Cao, Zhangjie [1 ,2 ]
Long, Mingsheng [1 ,2 ]
Wang, Jianmin [1 ,2 ]
Affiliations
[1] Tsinghua Univ, Sch Software, BNRist, Beijing, Peoples R China
[2] Tsinghua Univ, Res Ctr Big Data, Beijing, Peoples R China
Source
COMPUTER VISION - ECCV 2020, PT XV | 2020 / Vol. 12360
Keywords
Universal domain adaptation; Open class detection;
DOI
10.1007/978-3-030-58555-6_34
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Universal domain adaptation (UniDA) transfers knowledge between domains without any constraint on the label sets, extending the applicability of domain adaptation in the wild. In UniDA, both the source and target label sets may hold individual labels not shared by the other domain. One challenge of UniDA is to classify the target examples in the shared classes despite the domain shift. A more prominent challenge is to mark the target examples in the target-individual label set (open classes) as "unknown". These two entangled challenges make UniDA a highly under-explored problem. Previous work on UniDA focuses on classifying data in the shared classes and uses per-class accuracy as the evaluation metric, which is heavily biased toward the accuracy of the shared classes. However, accurately detecting open classes is the mission-critical task for enabling real universal domain adaptation: once the open classes are detected, UniDA reduces to a well-established closed-set domain adaptation problem. Towards accurate open class detection, we propose Calibrated Multiple Uncertainties (CMU) with a novel transferability measure estimated from a mixture of complementary uncertainty quantities: entropy, confidence, and consistency, defined on conditional probabilities calibrated by a multi-classifier ensemble model. The new transferability measure accurately quantifies the inclination of a target example toward the open classes. We also propose a novel evaluation metric called H-score, which emphasizes the importance of both the accuracy on the shared classes and the accuracy on the "unknown" class. Empirical results under the UniDA setting show that CMU outperforms the state-of-the-art domain adaptation methods on all the evaluation metrics, especially by a large margin on the H-score.
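The abstract names two computable quantities: a transferability score mixing entropy, confidence, and consistency over a multi-classifier ensemble, and the H-score evaluation metric. The sketch below is an illustrative NumPy reading of both, not the paper's exact formulation: the H-score is assumed to be the harmonic mean of the shared-class and "unknown"-class accuracies, the consistency term is approximated by ensemble voting agreement, and the equal-weight mixture of the three uncertainties is an assumption.

```python
import numpy as np

def h_score(acc_shared, acc_unknown):
    """Harmonic mean of shared-class accuracy and 'unknown' accuracy
    (assumed form of the H-score described in the abstract)."""
    if acc_shared + acc_unknown == 0:
        return 0.0
    return 2.0 * acc_shared * acc_unknown / (acc_shared + acc_unknown)

def transferability(probs):
    """Sketch of a CMU-style transferability score.
    probs: array of shape (n_classifiers, n_samples, n_classes),
    each slice a classifier's calibrated softmax output.
    Returns one score per sample; higher means more likely a shared class."""
    mean_p = probs.mean(axis=0)                   # ensemble-averaged probabilities
    k = probs.shape[-1]
    # Entropy of the averaged distribution, normalized to [0, 1].
    entropy = -(mean_p * np.log(mean_p + 1e-12)).sum(-1) / np.log(k)
    # Confidence: top-1 probability of the averaged distribution.
    confidence = mean_p.max(-1)
    # Consistency (assumed proxy): fraction of classifiers that agree
    # with the ensemble's argmax prediction.
    votes = probs.argmax(-1)                      # (n_classifiers, n_samples)
    consistency = (votes == mean_p.argmax(-1)).mean(axis=0)
    # Equal-weight mixture (weighting is an assumption): low entropy,
    # high confidence, and high agreement all indicate a shared class.
    return ((1.0 - entropy) + confidence + consistency) / 3.0
```

In this reading, a target example with a flat, disagreeing ensemble scores low and would be marked "unknown", while a confident, consistent example scores high and is treated as a shared-class sample.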
Pages: 567-583
Number of pages: 17