Invariant models for causal transfer learning

Cited by: 0
Authors
Rojas-Carulla, Mateo [1 ,2 ]
Schölkopf, Bernhard [1 ]
Turner, Richard [2 ]
Peters, Jonas [3 ]
Affiliations
[1] Max Planck Institute for Intelligent Systems, Tübingen, Germany
[2] Department of Engineering, Univ. of Cambridge, Cambridge, United Kingdom
[3] Department of Mathematical Sciences, Univ. of Copenhagen, Copenhagen, Denmark
DOI: not available
Abstract
Methods of transfer learning try to combine knowledge from several related tasks (or domains) to improve performance on a test task. Inspired by causal methodology, we relax the usual covariate shift assumption and assume that it holds true for a subset of predictor variables: the conditional distribution of the target variable given this subset of predictors is invariant across all tasks. We show how this assumption can be motivated from ideas in the field of causality. We focus on the problem of Domain Generalization, in which no examples from the test task are observed. We prove that, in an adversarial setting, using this subset for prediction is optimal in Domain Generalization; we further provide examples in which the tasks are sufficiently diverse and the estimator therefore outperforms pooling the data, even on average. If examples from the test task are available, we also provide a method to transfer knowledge from the training tasks and exploit all available features for prediction; however, we provide no guarantees for this method. We introduce a practical method which allows for automatic inference of the above subset and provide corresponding code. We present results on synthetic data sets and a gene deletion data set. © 2018 Mateo Rojas-Carulla, Bernhard Schölkopf, Richard Turner, and Jonas Peters.
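The core idea in the abstract can be illustrated on toy data. The sketch below is not the authors' implementation (their code accompanies the JMLR paper); it is a minimal, assumption-laden illustration: among candidate single-feature predictor sets, pick the one for which the conditional of the target given the features looks invariant across tasks, scored here by comparing per-task residual means of a pooled least-squares fit. The synthetic setup (features `X1`, `X2`, two tasks, a task-dependent shift in the `X2` mechanism) is invented for this example.

```python
def ols_1d(xs, ys):
    """Least-squares fit y ~ a*x + b for a single predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    return a, my - a * mx

# Two training tasks. Y depends on X1 invariantly (Y = 2*X1 + noise),
# while X2 is an effect of Y whose mechanism shifts between tasks.
noise = [0.1, -0.1, 0.1, -0.1, 0.0]
tasks = []
for shift in (0.0, 5.0):  # task-specific shift in the X2 mechanism
    x1 = [0.0, 1.0, 2.0, 3.0, 4.0]
    y = [2.0 * v + e for v, e in zip(x1, noise)]
    x2 = [yi + shift for yi in y]
    tasks.append({"X1": x1, "X2": x2, "Y": y})

def invariance_score(feature):
    """Pool both tasks, fit Y on one feature, and compare per-task
    residual means: a small score suggests the conditional of Y given
    the feature is task-invariant."""
    pooled_x = tasks[0][feature] + tasks[1][feature]
    pooled_y = tasks[0]["Y"] + tasks[1]["Y"]
    a, b = ols_1d(pooled_x, pooled_y)
    res_means = []
    for t in tasks:
        res = [yi - (a * xi + b) for xi, yi in zip(t[feature], t["Y"])]
        res_means.append(sum(res) / len(res))
    return abs(res_means[0] - res_means[1])

scores = {f: invariance_score(f) for f in ("X1", "X2")}
invariant_subset = min(scores, key=scores.get)
print(invariant_subset)  # → X1
```

Because the mechanism producing `X2` changes across tasks, a model using `X2` leaves task-dependent residuals, whereas the model using `X1` does not; the paper's method searches over subsets of predictors in this spirit, with proper statistical tests in place of the crude residual-mean comparison used here.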
Related Papers
50 results in total
  • [1] Invariant Models for Causal Transfer Learning
    Rojas-Carulla, Mateo
    Schoelkopf, Bernhard
    Turner, Richard
    Peters, Jonas
    JOURNAL OF MACHINE LEARNING RESEARCH, 2018, 19
  • [2] Invariant Causal Prediction for Nonlinear Models
    Heinze-Deml, Christina
    Peters, Jonas
    Meinshausen, Nicolai
    JOURNAL OF CAUSAL INFERENCE, 2018, 6 (02)
  • [3] Knowledge transfer for learning subject-specific causal models
    Rodriguez-Lopez, Veronica
    Enrique Sucar, Luis
    INTERNATIONAL CONFERENCE ON PROBABILISTIC GRAPHICAL MODELS, 2022, 186
  • [4] Invariant Policy Learning: A Causal Perspective
    Saengkyongam, Sorawit
    Thams, Nikolaj
    Peters, Jonas
    Pfister, Niklas
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (07) : 8606 - 8620
  • [5] Causal-Debias: Unifying Debiasing in Pretrained Language Models and Fine-tuning via Causal Invariant Learning
    Zhou, Fan
    Mao, Yuzhou
    Yu, Liu
    Yang, Yi
    Zhong, Ting
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, 2023, : 4227 - 4241
  • [6] Invariant Causal Imitation Learning for Generalizable Policies
    Bica, Ioana
    Jarrett, Daniel
    van der Schaar, Mihaela
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [7] Learning to Learn Causal Models
    Kemp, Charles
    Goodman, Noah D.
    Tenenbaum, Joshua B.
    COGNITIVE SCIENCE, 2010, 34 (07) : 1185 - 1243
  • [8] Causal models for learning technology
    Brokenshire, David
    Kumar, Vive
    8TH IEEE INTERNATIONAL CONFERENCE ON ADVANCED LEARNING TECHNOLOGIES, PROCEEDINGS, 2008, : 262 - 264
  • [9] Graph Contrastive Invariant Learning from the Causal Perspective
    Mo, Yanhu
    Wang, Xiao
    Fan, Shaohua
    Shi, Chuan
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 8, 2024, : 8904 - 8912
  • [10] Transfer Learning for Causal Sentence Detection
    Kyriakakis, Manolis
    Androutsopoulos, Ion
    Gines i Ametlle, Joan
    Saudabayev, Artur
    SIGBIOMED WORKSHOP ON BIOMEDICAL NATURAL LANGUAGE PROCESSING (BIONLP 2019), 2019, : 292 - 297