A generic approach for reproducible model distillation

Cited by: 1
Authors
Zhou, Yunzhe [1 ]
Xu, Peiru [2 ]
Hooker, Giles [3 ]
Affiliations
[1] Univ Calif Berkeley, Dept Psychol, 2121 Berkeley Way, Berkeley, CA 94720 USA
[2] Univ Calif Berkeley, Dept Math, Evans Hall, Berkeley, CA 94720 USA
[3] Univ Penn, Dept Stat & Data Sci, Acad Res Bldg, Philadelphia, PA 19104 USA
Funding
National Science Foundation (USA);
Keywords
Interpretability; Model distillation; Multiple testing; Reproducibility; Explainable artificial intelligence;
DOI
10.1007/s10994-024-06597-w
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Model distillation has been a popular method for producing interpretable machine learning. It uses an interpretable "student" model to mimic the predictions made by the black-box "teacher" model. However, when the student model is sensitive to the variability of the data sets used for training, even when the teacher is kept fixed, the corresponding interpretation is not reliable. Existing strategies stabilize model distillation by checking whether a large enough sample of pseudo-data is generated to reliably reproduce student models, but methods to do so have so far been developed separately for each specific class of student model. In this paper, we develop a generic approach for stable model distillation based on a central limit theorem for the estimated fidelity of the student to the teacher. We start with a collection of candidate student models and search for candidates that reasonably agree with the teacher. Then we construct a multiple testing framework to select a sample size such that a consistent student model would be selected across different pseudo samples. We demonstrate the application of our proposed approach on three commonly used intelligible models: decision trees, falling rule lists, and symbolic regression. Finally, we conduct simulation experiments on the Mammographic Mass and Breast Cancer datasets and illustrate the testing procedure through a theoretical analysis with a Markov process. The code is publicly available at https://github.com/yunzhe-zhou/GenericDistillation.
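To make the distillation setup described in the abstract concrete, the following is a minimal sketch in Python of the teacher-pseudo-data-student pipeline and the fidelity estimate that the paper's central limit theorem is built on. The teacher, the pseudo-data generator, the decision-tree student, and the helper name generate_pseudo_data are illustrative assumptions for this sketch, not the authors' GenericDistillation API; the paper's contribution is the multiple-testing procedure that chooses the pseudo-sample size, which is not reproduced here.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

# Black-box "teacher" model fit on some training data (stand-in example).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
teacher = RandomForestClassifier(random_state=0).fit(X, y)

def generate_pseudo_data(n, d=10, seed=None):
    """Draw pseudo-covariates; here simply Gaussian noise for illustration."""
    rng = np.random.default_rng(seed)
    return rng.normal(size=(n, d))

# Generate pseudo-data, label it with the teacher, and fit an interpretable
# "student" (a shallow decision tree) to mimic the teacher's predictions.
n_pseudo = 2000  # the role of the paper's test is to choose this sample size
X_pseudo = generate_pseudo_data(n_pseudo, seed=1)
y_pseudo = teacher.predict(X_pseudo)
student = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_pseudo, y_pseudo)

# Estimated fidelity: the rate at which the student reproduces the teacher's
# predictions on fresh pseudo-data. This is the quantity whose sampling
# variability the paper's generic approach controls.
X_eval = generate_pseudo_data(5000, seed=2)
fidelity = np.mean(student.predict(X_eval) == teacher.predict(X_eval))
print(f"estimated fidelity: {fidelity:.3f}")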
Pages: 7645-7688
Page count: 44