Auditing black-box models for indirect influence

Cited: 126
Authors
Adler, Philip [1 ]
Falk, Casey [1 ]
Friedler, Sorelle A. [1 ]
Nix, Tionney [1 ]
Rybeck, Gabriel [1 ]
Scheidegger, Carlos [2 ]
Smith, Brandon [1 ]
Venkatasubramanian, Suresh [3 ]
Affiliations
[1] Haverford Coll, Dept Comp Sci, Haverford, PA 19041 USA
[2] Univ Arizona, Dept Comp Sci, Tucson, AZ 85721 USA
[3] Univ Utah, Dept Comp Sci, Salt Lake City, UT 84112 USA
Funding
US National Science Foundation
Keywords
Black-box auditing; ANOVA; Algorithmic accountability; Deep learning; Discrimination-aware data mining; Feature influence; Interpretable machine learning;
DOI
10.1007/s10115-017-1116-3
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Data-trained predictive models see widespread use, but for the most part they are used as black boxes which output a prediction or score. It is therefore hard to acquire a deeper understanding of model behavior and in particular how different features influence the model prediction. This is important when interpreting the behavior of complex models or asserting that certain problematic attributes (such as race or gender) are not unduly influencing decisions. In this paper, we present a technique for auditing black-box models, which lets us study the extent to which existing models take advantage of particular features in the data set, without knowing how the models work. Our work focuses on the problem of indirect influence: how some features might indirectly influence outcomes via other, related features. As a result, we can find attribute influences even in cases where, upon further direct examination of the model, the attribute is not referred to by the model at all. Our approach does not require the black-box model to be retrained. This is important if, for example, the model is only accessible via an API, and contrasts our work with other methods that investigate feature influence such as feature selection. We present experimental evidence for the effectiveness of our procedure using a variety of publicly available data sets and models. We also validate our procedure using techniques from interpretable learning and feature selection, as well as against other black-box auditing procedures. To further demonstrate the effectiveness of this technique, we use it to audit a black-box recidivism prediction algorithm.
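The abstract's core idea, estimating a feature's indirect influence by "obscuring" it with respect to a protected attribute and re-scoring the fixed model, can be sketched on synthetic data. Everything below is illustrative: the toy black box, the data, and the group-mean-shift obscuring step (a crude simplification of the paper's rank-preserving, quantile-based repair) are assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000

# Protected attribute A (e.g., a group label) -- the model never sees it.
A = rng.integers(0, 2, n)
X1 = A + rng.normal(0.0, 0.5, n)   # proxy feature, strongly correlated with A
X2 = rng.normal(0.0, 1.0, n)       # unrelated noise feature
X = np.column_stack([X1, X2])
y = (X1 > 0.5).astype(int)         # outcome depends on A only through X1

def black_box_predict(X):
    """Stand-in for an opaque model reachable only through an API:
    we may call it, but never inspect or retrain it."""
    return (X[:, 0] > 0.5).astype(int)

def obscure(X, A, col):
    """Remove the part of X[:, col] predictable from A by shifting each
    A-group onto the overall mean (a simplified stand-in for the paper's
    quantile-based repair)."""
    Xo = X.astype(float).copy()
    overall = Xo[:, col].mean()
    for g in np.unique(A):
        mask = A == g
        Xo[mask, col] += overall - Xo[mask, col].mean()
    return Xo

base_acc = np.mean(black_box_predict(X) == y)

# Indirect influence of A through each feature = accuracy lost when that
# feature is obscured with respect to A. No retraining takes place.
drop = {}
for col, name in [(0, "X1"), (1, "X2")]:
    acc = np.mean(black_box_predict(obscure(X, A, col)) == y)
    drop[name] = float(base_acc - acc)

print(drop)  # the proxy X1 shows a large drop; the unrelated X2 shows none
```

Note that the audit flags X1 as a channel for A's influence even though the model never reads A itself, which is the indirect-influence setting the abstract describes; the audit needs only prediction access.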
Pages: 95-122
Page count: 28