An Automated Approach to Causal Inference in Discrete Settings

Cited by: 5
Authors
Duarte, Guilherme [1 ]
Finkelstein, Noam [2 ]
Knox, Dean [1 ]
Mummolo, Jonathan [3 ,4 ]
Shpitser, Ilya [2 ]
Affiliations
[1] Univ Penn, Wharton Sch, Operat Informat & Decis Dept, Philadelphia, PA 19104 USA
[2] Johns Hopkins Univ, Whiting Sch Engn, Dept Comp Sci, Baltimore, MD USA
[3] Princeton Univ, Dept Polit, Princeton, NJ USA
[4] Princeton Univ, Sch Publ & Int Affairs, Princeton, NJ USA
Funding
U.S. National Institutes of Health; U.S. National Science Foundation
Keywords
Causal inference; Constrained optimization; Partial identification; Linear programming; Polynomial programming; Principal stratification; Bounds; Margins
DOI
10.1080/01621459.2023.2216909
Chinese Library Classification
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics]
Subject Classification Codes
020208; 070103; 0714
Abstract
Applied research conditions often make it impossible to point-identify causal estimands without untenable assumptions. Partial identification, which bounds the range of possible solutions, is a principled alternative, but the difficulty of deriving bounds in idiosyncratic settings has restricted its application. We present a general, automated numerical approach to causal inference in discrete settings. We show that causal questions with discrete data reduce to polynomial programming problems, then present an algorithm that automatically bounds causal effects using efficient dual relaxation and spatial branch-and-bound techniques. The user declares an estimand, states assumptions, and provides data, however incomplete or mismeasured. The algorithm then searches over admissible data-generating processes and outputs the most precise possible range consistent with the available information (that is, sharp bounds), including a point-identified solution if one exists. Because this search can be computationally intensive, our procedure reports and continually refines non-sharp ranges that are guaranteed to contain the truth at all times, even when the algorithm is not run to completion. Moreover, it offers an epsilon-sharpness guarantee, characterizing the worst-case looseness of the incomplete bounds. These techniques are implemented in our Python package, autobounds. Analytically validated simulations show that the method accommodates classic obstacles, including confounding, selection, measurement error, noncompliance, and nonresponse. Supplementary materials for this article are available online.
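To make the reduction described in the abstract concrete, the sketch below (a minimal illustration, not the authors' autobounds package; the observed distribution is hypothetical) encodes the simplest discrete problem, a binary treatment X and binary outcome Y with arbitrary unobserved confounding, as a linear program over the unknown joint distribution of treatment and potential outcomes. Minimizing and maximizing the average treatment effect subject to consistency with the observed data yields sharp no-assumption (Manski-style) bounds.

```python
# Minimal sketch of the "search over admissible data-generating processes"
# idea from the abstract, using an off-the-shelf LP solver (scipy), not the
# authors' autobounds package. Observed probabilities below are hypothetical.
import itertools

import numpy as np
from scipy.optimize import linprog

# Hypothetical observed joint distribution P(X = x, Y = y).
p_obs = {(0, 0): 0.30, (0, 1): 0.20, (1, 0): 0.15, (1, 1): 0.35}

# Decision variables q[x, y0, y1] = P(X = x, Y(0) = y0, Y(1) = y1):
# the unknown joint distribution of treatment and potential outcomes.
cells = list(itertools.product([0, 1], repeat=3))
idx = {cell: i for i, cell in enumerate(cells)}
n = len(cells)

# Consistency constraints: each observed probability P(X = x, Y = y) is the
# total mass of potential-outcome types that would produce (x, y); all
# probabilities sum to one.
A_eq, b_eq = [], []
for (x, y), prob in p_obs.items():
    row = np.zeros(n)
    for (xx, y0, y1) in cells:
        realized = y1 if xx == 1 else y0
        if xx == x and realized == y:
            row[idx[(xx, y0, y1)]] = 1.0
    A_eq.append(row)
    b_eq.append(prob)
A_eq.append(np.ones(n))
b_eq.append(1.0)
A_eq = np.array(A_eq)

# Estimand: the average treatment effect E[Y(1) - Y(0)], linear in q.
c = np.array([y1 - y0 for (_x, y0, y1) in cells], dtype=float)

# Sharp bounds: minimize and maximize the estimand over all admissible q.
lower = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1)).fun
upper = -linprog(-c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1)).fun
print(f"Sharp no-assumption ATE bounds: [{lower:.3f}, {upper:.3f}]")
```

For the hypothetical distribution above this prints bounds of roughly [-0.350, 0.650], a width-one interval, as expected when nothing is assumed about confounding. Additional assumptions (for example, an instrument or monotonicity) enter as extra constraints, and richer estimands or measurement models make the program polynomial rather than linear, which is the setting the paper's dual relaxation and spatial branch-and-bound machinery addresses.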
Pages: 1778-1793
Page count: 16