Lighter, Better, Faster Multi-source Domain Adaptation with Gaussian Mixture Models and Optimal Transport

Cited by: 0
Authors
Montesuma, Eduardo Fernandes [1 ]
Mboula, Fred Ngole [1 ]
Souloumiac, Antoine [1 ]
Affiliations
[1] Univ Paris Saclay, CEA, LIST, F-91120 Palaiseau, France
Keywords
Domain Adaptation; Optimal Transport; Gaussian Mixture Models
DOI
10.1007/978-3-031-70365-2_2
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
In this paper, we tackle Multi-Source Domain Adaptation (MSDA), a task in transfer learning where one adapts multiple heterogeneous, labeled source probability measures towards a different, unlabeled target measure. We propose a novel framework for MSDA, based on Optimal Transport (OT) and Gaussian Mixture Models (GMMs). Our framework has two key advantages. First, OT between GMMs can be solved efficiently via linear programming. Second, it provides a convenient model for supervised learning, especially classification, as components in the GMM can be associated with existing classes. Based on the GMM-OT problem, we propose a novel technique for calculating barycenters of GMMs. Based on this novel algorithm, we propose two new strategies for MSDA: GMM-Wasserstein Barycenter Transport (WBT) and GMM-Dataset Dictionary Learning (DaDiL). We empirically evaluate our proposed methods on four benchmarks in image classification and fault diagnosis, showing that we improve over the prior art while being faster and involving fewer parameters (our code is publicly available at https://github.com/eddardd/gmm_msda).
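
A rough illustration of the GMM-OT subproblem the abstract refers to (not the authors' implementation; function names are illustrative, and the use of the POT and SciPy libraries is an assumption): the ground cost is the squared 2-Wasserstein distance between pairs of Gaussian components, and the coupling over the mixture weights is obtained from a small linear program, which is why OT between GMMs is cheap to solve.

# Minimal sketch, assuming POT (ot.emd) and SciPy are available.
# Names such as gmm_ot_plan are hypothetical and not taken from the paper's repository.
import numpy as np
import ot  # POT: Python Optimal Transport
from scipy.linalg import sqrtm


def gaussian_w2_sq(m0, C0, m1, C1):
    """Squared 2-Wasserstein distance between N(m0, C0) and N(m1, C1)."""
    C0_half = np.real(sqrtm(C0))
    cross = np.real(sqrtm(C0_half @ C1 @ C0_half))
    bures = np.trace(C0 + C1 - 2.0 * cross)
    return float(np.sum((m0 - m1) ** 2) + bures)


def gmm_ot_plan(w0, means0, covs0, w1, means1, covs1):
    """Couple the components of two GMMs via a small linear program."""
    K0, K1 = len(w0), len(w1)
    M = np.zeros((K0, K1))
    for i in range(K0):
        for j in range(K1):
            M[i, j] = gaussian_w2_sq(means0[i], covs0[i], means1[j], covs1[j])
    # Discrete OT over the mixture weights: a K0 x K1 linear program.
    plan = ot.emd(np.asarray(w0, dtype=float), np.asarray(w1, dtype=float), M)
    return plan, float(np.sum(plan * M))  # coupling and transport cost


# Toy usage: two 2-component GMMs in 2D with identity covariances.
rng = np.random.default_rng(0)
w = np.array([0.5, 0.5])
means_a = rng.normal(size=(2, 2))
means_b = rng.normal(size=(2, 2)) + 2.0
covs = np.stack([np.eye(2)] * 2)
plan, cost = gmm_ot_plan(w, means_a, covs, w, means_b, covs)
print(plan, cost)

In the MSDA setting described above, each source component would additionally carry a class label, so the coupling can be read per class; that refinement, as well as the barycenter computation, is omitted from this sketch.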
Pages: 21-38
Page count: 18
Related Papers
50 items in total; entries [21]-[30] shown
  • [21] Zhang, Kun; Gong, Mingming; Schoelkopf, Bernhard. Multi-Source Domain Adaptation: A Causal View. Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015: 3150-3157.
  • [22] Amosy, Ohad; Chechik, Gal. Coupled Training for Multi-Source Domain Adaptation. 2022 IEEE Winter Conference on Applications of Computer Vision (WACV 2022), 2022: 1071-1080.
  • [23] Li, Yunsheng; Yuan, Lu; Chen, Yinpeng; Wang, Pei; Vasconcelos, Nuno. Dynamic Transfer for Multi-Source Domain Adaptation. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2021), 2021: 10993-11002.
  • [24] Yao, Xingxu; Zhao, Sicheng; Xu, Pengfei; Yang, Jufeng. Multi-Source Domain Adaptation for Object Detection. 2021 IEEE/CVF International Conference on Computer Vision (ICCV 2021), 2021: 3253-3262.
  • [25] Redko, Ievgen; Habrard, Amaury; Sebban, Marc. On the analysis of adaptability in multi-source domain adaptation. Machine Learning, 2019, 108: 1635-1652.
  • [26] Komatsu, Tatsuya; Matsui, Tomoko; Gao, Junbin. Multi-Source Domain Adaptation with Sinkhorn Barycenter. 29th European Signal Processing Conference (EUSIPCO 2021), 2021: 1371-1375.
  • [27] Xu, Minghao; Wang, Hang; Ni, Bingbing. Graphical Modeling for Multi-Source Domain Adaptation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024, 46(03): 1727-1741.
  • [28] Cui, Xia; Bollegala, Danushka. Multi-Source Attention for Unsupervised Domain Adaptation. 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing (AACL-IJCNLP 2020), 2020: 873-883.
  • [29] Yi, Haiyang; Xu, Zhi; Wen, Yimin; Fan, Zhigang. Multi-source Domain Adaptation for Face Recognition. 2018 24th International Conference on Pattern Recognition (ICPR), 2018: 1349-1354.
  • [30] Wright, Dustin; Augenstein, Isabelle. Transformer Based Multi-Source Domain Adaptation. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020: 7963-7974.