Invariant models for causal transfer learning

Cited by: 0
Authors
Rojas-Carulla, Mateo [1 ,2 ]
Schölkopf, Bernhard [1 ]
Turner, Richard [2 ]
Peters, Jonas [3 ]
Affiliations
[1] Max Planck Institute for Intelligent Systems, Tübingen, Germany
[2] Department of Engineering, University of Cambridge, Cambridge, United Kingdom
[3] Department of Mathematical Sciences, University of Copenhagen, Copenhagen, Denmark
DOI: Not available
Abstract
Methods of transfer learning try to combine knowledge from several related tasks (or domains) to improve performance on a test task. Inspired by causal methodology, we relax the usual covariate shift assumption and assume that it holds true for a subset of predictor variables: the conditional distribution of the target variable given this subset of predictors is invariant across all tasks. We show how this assumption can be motivated by ideas from the field of causality. We focus on the problem of Domain Generalization, in which no examples from the test task are observed. We prove that, in an adversarial setting, using this subset for prediction is optimal in Domain Generalization; we further provide examples in which the tasks are sufficiently diverse that the estimator outperforms pooling the data, even on average. If examples from the test task are available, we also provide a method that transfers knowledge from the training tasks and exploits all available features for prediction, although we provide no guarantees for this method. We introduce a practical method that automatically infers the above subset and provide corresponding code. We present results on synthetic data sets and a gene deletion data set. © 2018 Mateo Rojas-Carulla, Bernhard Schölkopf, Richard Turner, and Jonas Peters.
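As a rough illustration of the subset-inference problem described in the abstract (not the authors' released implementation), the sketch below searches over predictor subsets and keeps those whose regression residuals look invariant across training tasks; a Levene test on per-task residuals stands in for a full invariance test, linear models are assumed, and `find_invariant_subset` is a hypothetical helper name.

```python
import itertools

import numpy as np
from scipy import stats
from sklearn.linear_model import LinearRegression

def find_invariant_subset(X_tasks, y_tasks, alpha=0.05):
    """Return the predictor subset with the lowest pooled squared error
    among subsets whose residuals appear invariant across tasks.

    X_tasks: list of (n_i, p) feature arrays, one per training task.
    y_tasks: list of (n_i,) target arrays, one per training task.
    """
    n_features = X_tasks[0].shape[1]
    accepted = []  # (pooled MSE, subset) for subsets passing the test
    for size in range(1, n_features + 1):
        for subset in itertools.combinations(range(n_features), size):
            cols = list(subset)
            # Fit one regression on the data pooled over all tasks.
            X_pool = np.vstack([X[:, cols] for X in X_tasks])
            y_pool = np.concatenate(y_tasks)
            model = LinearRegression().fit(X_pool, y_pool)
            # Per-task residuals under the shared model.
            residuals = [y - model.predict(X[:, cols])
                         for X, y in zip(X_tasks, y_tasks)]
            # Crude invariance proxy: equal residual spread across tasks.
            _, pval = stats.levene(*residuals)
            if pval > alpha:
                mse = np.mean((y_pool - model.predict(X_pool)) ** 2)
                accepted.append((mse, cols))
    # Among subsets not rejected as task-dependent, keep the most predictive.
    return min(accepted)[1] if accepted else None
```

The exhaustive search is exponential in the number of predictors, so this sketch is only viable for small p; a stricter invariance test (e.g. also comparing residual means across tasks) would be a drop-in replacement for the Levene check.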