Understanding Self-Training for Gradual Domain Adaptation

Cited: 0
Authors
Kumar, Ananya [1 ]
Ma, Tengyu [1 ]
Liang, Percy [1 ]
Affiliations
[1] Stanford Univ, Stanford, CA 94305 USA
Source
INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 119, 2020
Keywords
COVARIATE SHIFT;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Machine learning systems must adapt to data distributions that evolve over time, in applications ranging from sensor networks and self-driving car perception modules to brain-machine interfaces. Traditional domain adaptation is only guaranteed to work when the distribution shift is small; empirical methods combine several heuristics for larger shifts but can be dataset specific. To adapt to larger shifts we consider gradual domain adaptation, where the goal is to adapt an initial classifier trained on a source domain given only unlabeled data that shifts gradually in distribution towards a target domain. We prove the first non-vacuous upper bound on the error of self-training with gradual shifts, under settings where directly adapting to the target domain can result in unbounded error. The theoretical analysis leads to algorithmic insights, highlighting that regularization and label sharpening are essential even when we have infinite data. Leveraging the gradual shift structure leads to higher accuracies on a rotating MNIST dataset, a forest Cover Type dataset, and a realistic Portraits dataset.
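The gradual self-training procedure summarized in the abstract can be sketched concisely. The snippet below is a minimal illustration under assumed inputs, not the authors' released code: X_source and y_source are labeled source-domain arrays, and X_intermediate is an assumed list of unlabeled feature arrays ordered from near-source to near-target (the target domain last). A regularized scikit-learn logistic regression stands in for the classifier, and hard pseudo-labels implement the label sharpening the abstract calls essential.

# Minimal sketch of gradual self-training (hypothetical example, not the paper's code).
from sklearn.linear_model import LogisticRegression

def gradual_self_train(X_source, y_source, X_intermediate, C=0.1):
    # C is the inverse regularization strength; keeping it modest reflects the
    # abstract's point that regularization matters even with ample data.
    clf = LogisticRegression(C=C, max_iter=1000)
    clf.fit(X_source, y_source)              # supervised training on the source domain
    for X_t in X_intermediate:               # one gradually shifted unlabeled domain per step
        pseudo_labels = clf.predict(X_t)     # hard ("sharpened") pseudo-labels
        clf = LogisticRegression(C=C, max_iter=1000)
        clf.fit(X_t, pseudo_labels)          # retrain on the pseudo-labeled domain
    return clf

Pseudo-labeling the target domain in a single step would correspond to direct adaptation, which the abstract notes can incur unbounded error; the loop above instead exploits the gradual shift structure one small step at a time.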
Pages: 12
Related papers
50 records in total
  • [1] Cycle Self-Training for Domain Adaptation
    Liu, Hong
    Wang, Jianmin
    Long, Mingsheng
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [2] Unsupervised domain adaptation with self-training for weed segmentation
    Huang, Yingchao
    Hussein, Amina E.
    Wang, Xin
    Bais, Abdul
    Yao, Shanshan
    Wilder, Tanis
    INTELLIGENT SYSTEMS WITH APPLICATIONS, 2025, 25
  • [3] Self-Training with Contrastive Learning for Adversarial Domain Adaptation
    Zhang, Xingyi, Institute of Electrical and Electronics Engineers Inc.
  • [4] Adversarial Domain Adaptation Enhanced via Self-training
    Altinel, Fazil
    Akkaya, Ibrahim Batuhan
    29TH IEEE CONFERENCE ON SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS (SIU 2021), 2021,
  • [5] Unsupervised Domain Adaptation with Multiple Domain Discriminators and Adaptive Self-Training
    Spadotto, Teo
    Toldo, Marco
    Michieli, Umberto
    Zanuttigh, Pietro
    2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 2845 - 2852
  • [6] Improve conditional adversarial domain adaptation using self-training
    Wang, Zi
    Sun, Xiaoliang
    Su, Ang
    Wang, Gang
    Li, Yang
    Yu, Qifeng
    IET IMAGE PROCESSING, 2021, 15 (10) : 2169 - 2178
  • [7] Energy-constrained Self-training for Unsupervised Domain Adaptation
    Liu, Xiaofeng
    Hu, Bo
    Liu, Xiongchang
    Lu, Jun
    You, Jane
    Kong, Lingsheng
    2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 7515 - 7520
  • [8] SRoUDA: Meta Self-Training for Robust Unsupervised Domain Adaptation
    Zhu, Wanqing
    Yin, Jia-Li
    Chen, Bo-Hao
    Liu, Ximeng
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 3, 2023, : 3852 - 3860
  • [9] Self-training transformer for source-free domain adaptation
    Yang, Guanglei
    Zhong, Zhun
    Ding, Mingli
    Sebe, Nicu
    Ricci, Elisa
    APPLIED INTELLIGENCE, 2023, 53 (13) : 16560 - 16574