Task Nuisance Filtration for Unsupervised Domain Adaptation

Cited by: 1
Authors
Uliel, David [1]
Giryes, Raja [1]
Affiliations
[1] Tel Aviv Univ, Dept Elect Engn, IL-6997801 Tel Aviv, Israel
Source
IEEE OPEN JOURNAL OF SIGNAL PROCESSING | 2025 / Vol. 6
Keywords
Feature extraction; Training; Loss measurement; Data mining; Random variables; Independent component analysis; Entropy; Adaptation models; Weight measurement; Blind source separation; domain adaptation; information theory; mutual information; machine learning; algorithms
DOI
10.1109/OJSP.2025.3536850
CLC classification codes
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline codes
0808; 0809
Abstract
In unsupervised domain adaptation (UDA), labeled data is available for one domain (the source domain), generated according to some distribution, and unlabeled data is available for a second domain (the target domain), generated from a possibly different distribution but sharing the same task. The goal is to learn a model that performs well on the target domain even though labels are available only for the source data. Many recent works attempt to align the source and target domains by matching their marginal distributions in a learned feature space. In this paper, we treat the domain difference as a nuisance and enable better adaptation between the domains by encouraging minimality of the target-domain representation, disentanglement of the features, and a smoother feature space that clusters the target data better. To this end, we use information bottleneck theory and a classical technique from the blind source separation framework, namely independent component analysis (ICA). We show that these concepts improve the performance of leading domain adaptation methods on various domain adaptation benchmarks.
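The paper's actual objectives build on the information bottleneck and ICA; as a rough, hypothetical sketch (not the authors' implementation), the independence and minimality ideas from the abstract can be approximated with two simple penalties on a feature batch: an off-diagonal covariance term that discourages correlated (entangled) features, and a prediction-entropy term on unlabeled target batches that encourages confident, well-clustered target outputs. All names and weights here are illustrative assumptions.

```python
import numpy as np

def decorrelation_penalty(features):
    """Sum of squared off-diagonal covariance entries: a crude
    proxy for an ICA-style feature-independence objective."""
    f = features - features.mean(axis=0, keepdims=True)
    cov = (f.T @ f) / (len(f) - 1)
    off_diag = cov - np.diag(np.diag(cov))
    return float(np.sum(off_diag ** 2))

def entropy_penalty(logits):
    """Mean prediction entropy over a batch; minimizing it pushes
    (unlabeled) target samples toward confident cluster assignments."""
    z = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return float(-np.mean(np.sum(p * np.log(p + 1e-12), axis=1)))

# Toy usage: combine the two penalties with an illustrative weight.
rng = np.random.default_rng(0)
target_feats = rng.normal(size=(128, 16))    # stand-in for learned features
target_logits = rng.normal(size=(128, 10))   # stand-in for classifier outputs
regularizer = decorrelation_penalty(target_feats) + 0.1 * entropy_penalty(target_logits)
```

In a real UDA pipeline such terms would be added to the supervised source-domain loss and minimized jointly; the 0.1 weight above is an arbitrary placeholder, not a value from the paper.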
Pages: 303-311
Page count: 9