Stable learning establishes some common ground between causal inference and machine learning

Times Cited: 136
Authors
Cui, Peng [1 ,2 ]
Athey, Susan [3 ]
Affiliations
[1] Tsinghua Univ, Dept Comp Sci, Beijing, Peoples R China
[2] Beijing Acad Artificial Intelligence, Beijing, Peoples R China
[3] Stanford Univ, Grad Sch Business, Stanford, CA 94305 USA
Funding
National Key R&D Program of China; National Natural Science Foundation of China
Keywords
MODELS;
DOI
10.1038/s42256-022-00445-z
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Causal inference has recently attracted substantial attention in the machine learning and artificial intelligence community. It is usually positioned as a distinct strand of research that can broaden the scope of machine learning from predictive modelling to intervention and decision-making. In this Perspective, however, we argue that ideas from causality can also be used to improve the stronghold of machine learning, predictive modelling, if predictive stability, explainability and fairness are important. With the aim of bridging the gap between the tradition of precise modelling in causal inference and black-box approaches from machine learning, stable learning is proposed and developed as a source of common ground. This Perspective clarifies a source of risk for machine learning models and discusses the benefits of bringing causality into learning. We identify the fundamental problems addressed by stable learning, as well as the latest progress from both causal inference and learning perspectives, and we discuss relationships with explainability and fairness problems.

Machine learning performs well at predictive modelling based on statistical correlations, but for high-stakes applications, more robust, explainable and fair approaches are required. Cui and Athey discuss the benefits of bringing causal inference into machine learning, presenting a stable learning approach.
Pages: 110-115
Number of Pages: 6