Invariant models for causal transfer learning

Cited by: 0
Authors
Rojas-Carulla, Mateo [1 ,2 ]
Schölkopf, Bernhard [1 ]
Turner, Richard [2 ]
Peters, Jonas [3 ]
Affiliations
[1] Max Planck Institute for Intelligent Systems, Tübingen, Germany
[2] Department of Engineering, Univ. of Cambridge, Cambridge, United Kingdom
[3] Department of Mathematical Sciences, Univ. of Copenhagen, Copenhagen, Denmark
Abstract
Methods of transfer learning try to combine knowledge from several related tasks (or domains) to improve performance on a test task. Inspired by causal methodology, we relax the usual covariate shift assumption and assume that it holds true for a subset of predictor variables: the conditional distribution of the target variable given this subset of predictors is invariant across all tasks. We show how this assumption can be motivated by ideas from the field of causality. We focus on the problem of Domain Generalization, in which no examples from the test task are observed. We prove that in an adversarial setting, using this subset for prediction is optimal in Domain Generalization; we further provide examples in which the tasks are sufficiently diverse that the estimator outperforms pooling the data, even on average. If examples from the test task are available, we also provide a method to transfer knowledge from the training tasks and exploit all available features for prediction; however, we provide no guarantees for this method. We introduce a practical method that allows automatic inference of the above subset and provide corresponding code. We present results on synthetic data sets and a gene deletion data set. © 2018 Mateo Rojas-Carulla, Bernhard Schölkopf, Richard Turner, and Jonas Peters.
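The core idea of the abstract can be illustrated with a small sketch: search over subsets of predictors and keep those for which the residuals of a pooled regression of the target on that subset look identically distributed across tasks. This is not the authors' released code; the function name, the brute-force subset search, the linear model, and the use of a Levene test as the invariance check are all simplifying assumptions for illustration.

```python
import itertools
import numpy as np
from scipy import stats

def find_invariant_subset(X, y, task, alpha=0.01):
    """Return the predictor subset whose pooled-regression residuals are
    plausibly identically distributed across tasks (Levene test as a
    crude invariance check), breaking ties by smallest pooled MSE."""
    n, d = X.shape
    tasks = np.unique(task)
    best, best_mse = (), np.inf
    for r in range(1, d + 1):
        for S in itertools.combinations(range(d), r):
            Xs = np.column_stack([X[:, S], np.ones(n)])  # add intercept
            beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
            resid = y - Xs @ beta
            groups = [resid[task == t] for t in tasks]
            # Accept S only if residual spread is comparable across tasks.
            if stats.levene(*groups).pvalue > alpha:
                mse = np.mean(resid ** 2)
                if mse < best_mse:
                    best, best_mse = S, mse
    return best

# Toy data with three tasks: x0 -> y is invariant across tasks, while
# the relation between x1 and y shifts from task to task.
rng = np.random.default_rng(0)
task = np.repeat([0, 1, 2], 300)
x0 = rng.normal(size=900)
y = 2.0 * x0 + rng.normal(scale=0.5, size=900)
x1 = y * np.array([0.5, 1.5, -1.0])[task] + rng.normal(size=900)
X = np.column_stack([x0, x1])
print(find_invariant_subset(X, y, task))
```

On this toy data the search should recover the invariant predictor x0, even though x1 is predictive within each training task: relying on x1 would break down on a new task where its relation to y shifts again, which is the Domain Generalization argument in the abstract.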
Related papers
50 records in total
  • [21] Structure Learning for Cyclic Linear Causal Models
    Amendola, Carlos
    Dettling, Philipp
    Drton, Mathias
    Onori, Federica
    Wu, Jun
    CONFERENCE ON UNCERTAINTY IN ARTIFICIAL INTELLIGENCE (UAI 2020), 2020, 124 : 999 - 1008
  • [22] Training Machine Learning Models With Causal Logic
    Li, Ang
    Chen, Suming J.
    Qin, Jingzheng
    Qin, Zhen
    WWW'20: COMPANION PROCEEDINGS OF THE WEB CONFERENCE 2020, 2020, : 557 - 561
  • [23] On Learning Causal Models from Relational Data
    Lee, Sanghack
    Honavar, Vasant
    THIRTIETH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2016, : 3263 - 3270
  • [24] Invariant Feature Learning Based on Causal Inference from Heterogeneous Environments
    Su, Hang
    Wang, Wei
    MATHEMATICS, 2024, 12 (05)
  • [25] Learning Dynamics Models with Stable Invariant Sets
    Takeishi, Naoya
    Kawahara, Yoshinobu
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 9782 - 9790
  • [26] Estimating individual-level optimal causal interventions combining causal models and machine learning models
    Kiritoshi, Keisuke
    Izumitani, Tomonori
    Koyama, Kazuki
    Okawachi, Tomomi
    Asahara, Keisuke
    Shimizu, Shohei
    KDD'21 WORKSHOP ON CAUSAL DISCOVERY, VOL 150, 2021, 150 : 55 - 77
  • [27] Transfer tensor method applied to translation invariant models
    Sobotta, G.
    PHYSICA A, 1985, 130 (1-2): 254 - 272
  • [28] Causal Models and Learning from Data: Integrating Causal Modeling and Statistical Estimation
    Petersen, Maya L.
    van der Laan, Mark J.
    EPIDEMIOLOGY, 2014, 25 (03) : 418 - 426
  • [29] A Bayesian Theory of Sequential Causal Learning and Abstract Transfer
    Lu, Hongjing
    Rojas, Randall R.
    Beckers, Tom
    Yuille, Alan L.
    COGNITIVE SCIENCE, 2016, 40 (02) : 404 - 439
  • [30] On Learning Domain-Invariant Representations for Transfer Learning with Multiple Sources
    Trung Phung
    Trung Le
    Long Vuong
    Toan Tran
    Anh Tran
    Bui, Hung
    Dinh Phung
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34