Lighter, Better, Faster Multi-source Domain Adaptation with Gaussian Mixture Models and Optimal Transport

Cited by: 0
|
Authors
Montesuma, Eduardo Fernandes [1 ]
Mboula, Fred Ngole [1 ]
Souloumiac, Antoine [1 ]
Affiliations
[1] Univ Paris Saclay, CEA, LIST, F-91120 Palaiseau, France
Keywords
Domain Adaptation; Optimal Transport; Gaussian Mixture Models;
DOI
10.1007/978-3-031-70365-2_2
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper, we tackle Multi-Source Domain Adaptation (MSDA), a task in transfer learning where one adapts multiple heterogeneous, labeled source probability measures towards a different, unlabeled target measure. We propose a novel framework for MSDA, based on Optimal Transport (OT) and Gaussian Mixture Models (GMMs). Our framework has two key advantages. First, OT between GMMs can be solved efficiently via linear programming. Second, it provides a convenient model for supervised learning, especially classification, as components in the GMM can be associated with existing classes. Based on the GMM-OT problem, we propose a novel technique for calculating barycenters of GMMs. Building on this algorithm, we propose two new strategies for MSDA: GMM-Wasserstein Barycenter Transport (WBT) and GMM-Dataset Dictionary Learning (DaDiL). We empirically evaluate our proposed methods on four benchmarks in image classification and fault diagnosis, showing that we improve over the prior art while being faster and involving fewer parameters (our code is publicly available at https://github.com/eddardd/gmm_msda).
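The abstract's first claim — that OT between GMMs reduces to an efficiently solvable linear program — can be sketched in a few lines. The sketch below is a generic illustration of the GMM-OT formulation (transport between mixture components, with the cost given by the squared 2-Wasserstein distance between component Gaussians), not the authors' released code; the function names `gaussian_w2_sq` and `gmm_ot` are our own.

```python
import numpy as np
from scipy.linalg import sqrtm
from scipy.optimize import linprog


def gaussian_w2_sq(m1, S1, m2, S2):
    """Squared 2-Wasserstein distance between Gaussians N(m1,S1), N(m2,S2)."""
    r1 = sqrtm(S1)
    cross = np.real(sqrtm(r1 @ S2 @ r1))  # Bures cross term
    return float(np.sum((m1 - m2) ** 2) + np.trace(S1 + S2 - 2 * cross))


def gmm_ot(pi1, means1, covs1, pi2, means2, covs2):
    """OT between two GMMs as a linear program over component weights.

    Returns the K x L transport plan between components and the optimal cost.
    """
    K, L = len(pi1), len(pi2)
    # Pairwise ground cost between components.
    C = np.array([[gaussian_w2_sq(means1[i], covs1[i], means2[j], covs2[j])
                   for j in range(L)] for i in range(K)])
    # Marginal constraints: rows of the plan sum to pi1, columns to pi2.
    A_eq, b_eq = [], []
    for i in range(K):
        row = np.zeros(K * L)
        row[i * L:(i + 1) * L] = 1.0
        A_eq.append(row)
        b_eq.append(pi1[i])
    for j in range(L):
        col = np.zeros(K * L)
        col[j::L] = 1.0
        A_eq.append(col)
        b_eq.append(pi2[j])
    res = linprog(C.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None), method="highs")
    return res.x.reshape(K, L), float(res.fun)
```

Because the LP has only K x L variables (one per pair of components) rather than one per data point, it remains cheap even for large datasets, which is the source of the efficiency claim.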
Pages: 21-38 (18 pages)
Related Papers
50 items in total
  • [11] BAYESIAN MULTI-SOURCE DOMAIN ADAPTATION
    Sun, Shi-Liang
    Shi, Hong-Lei
    PROCEEDINGS OF 2013 INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND CYBERNETICS (ICMLC), VOLS 1-4, 2013, : 24 - 28
  • [12] Multi-Source Survival Domain Adaptation
    Shaker, Ammar
    Lawrence, Carolin
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 8, 2023, : 9752 - 9762
  • [13] Optimal Transport for Gaussian Mixture Models
    Chen, Yongxin
    Georgiou, Tryphon T.
    Tannenbaum, Allen
    IEEE ACCESS, 2019, 7 : 6269 - 6278
  • [14] Domain knowledge boosted adaptation: Leveraging vision-language models for multi-source domain adaptation
    He, Yuwei
    Feng, Juexiao
    Ding, Guiguang
    Guo, Yuchen
    He, Tao
    NEUROCOMPUTING, 2025, 619
  • [15] Multi-source multi-modal domain adaptation
    Zhao, Sicheng
    Jiang, Jing
    Tang, Wenbo
    Zhu, Jiankun
    Chen, Hui
    Xu, Pengfei
    Schuller, Bjorn W.
    Tao, Jianhua
    Yao, Hongxun
    Ding, Guiguang
    INFORMATION FUSION, 2025, 117
  • [16] Wasserstein Barycenter for Multi-Source Domain Adaptation
    Montesuma, Eduardo Fernandes
    Mboula, Fred Maurice Ngole
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 16780 - 16788
  • [17] Unsupervised Multi-source Domain Adaptation for Regression
    Richard, Guillaume
    de Mathelin, Antoine
    Hebrail, Georges
    Mougeot, Mathilde
    Vayatis, Nicolas
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2020, PT I, 2021, 12457 : 395 - 411
  • [18] On the analysis of adaptability in multi-source domain adaptation
    Redko, Ievgen
    Habrard, Amaury
    Sebban, Marc
    MACHINE LEARNING, 2019, 108 (8-9) : 1635 - 1652
  • [19] Multi-source Domain Adaptation for Semantic Segmentation
    Zhao, Sicheng
    Li, Bo
    Yue, Xiangyu
    Gu, Yang
    Xu, Pengfei
    Hu, Runbo
    Chai, Hua
    Keutzer, Kurt
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [20] Multi-Source Contribution Learning for Domain Adaptation
    Li, Keqiuyin
    Lu, Jie
    Zuo, Hua
    Zhang, Guangquan
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33 (10) : 5293 - 5307