Conditional Bures Metric for Domain Adaptation

Cited by: 49
Authors
Luo, You-Wei [1 ]
Ren, Chuan-Xian [1 ,2 ]
Affiliations
[1] Sun Yat Sen Univ, Sch Math, Guangzhou, Guangdong, Peoples R China
[2] Pazhou Lab, Guangzhou, Peoples R China
Source
2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021 | 2021
Funding
National Natural Science Foundation of China;
Keywords
KERNEL;
DOI
10.1109/CVPR46437.2021.01377
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
As a vital problem in classification-oriented transfer, unsupervised domain adaptation (UDA) has attracted widespread attention in recent years. Previous UDA methods assume that only the marginal distributions of the domains are shifted, ignoring the discriminant information carried by the label distributions, which degrades classification performance in real applications. In this work, we focus on the conditional distribution shift problem, a central concern for current conditional-invariant models, and seek a kernel covariance embedding for conditional distributions, which remains unexplored. Theoretically, we propose the Conditional Kernel Bures (CKB) metric to characterize conditional distribution discrepancy, and derive an empirical estimator of the CKB metric that avoids the implicit kernel feature map, providing an interpretable view of the knowledge transfer mechanism. A consistency theory for the empirical estimator gives a theoretical guarantee of convergence. Building on the metric, a conditional distribution matching network is proposed to learn conditionally invariant and discriminative features for UDA. Extensive experiments and analysis demonstrate the superiority of the proposed model.
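For orientation, the CKB metric builds on the Bures-Wasserstein geometry of covariance operators. The sketch below is an illustrative aid rather than the authors' implementation: it computes the classical Bures-Wasserstein distance between two finite-dimensional sample covariance matrices, the quantity that the CKB metric generalizes to conditional kernel covariance embeddings. The function name and the toy data are assumptions made for illustration only.

import numpy as np
from scipy.linalg import sqrtm

def bures_wasserstein_distance(cov_a, cov_b):
    # Classical Bures-Wasserstein distance between two PSD covariance matrices:
    # d^2 = tr(A) + tr(B) - 2 tr((A^{1/2} B A^{1/2})^{1/2}).
    sqrt_a = sqrtm(cov_a)
    cross = sqrtm(sqrt_a @ cov_b @ sqrt_a)
    # sqrtm can introduce tiny imaginary parts from numerical noise; keep the real part.
    d2 = np.trace(cov_a) + np.trace(cov_b) - 2.0 * np.trace(cross).real
    return float(np.sqrt(max(d2, 0.0)))

# Toy usage with synthetic features standing in for source/target domain features.
rng = np.random.default_rng(0)
source_features = rng.normal(size=(200, 16))
target_features = 1.5 * rng.normal(size=(200, 16)) + 0.3
print(bures_wasserstein_distance(np.cov(source_features.T), np.cov(target_features.T)))

In the paper's setting, as described in the abstract, the compared objects are class-conditional kernel covariance embeddings estimated from kernel matrices rather than raw feature covariances as above.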
Pages: 13984-13993
Number of pages: 10