secml: Secure and explainable machine learning in Python

Cited by: 4
Authors
Pintor, Maura [1 ,2 ]
Demetrio, Luca [1 ,2 ]
Sotgiu, Angelo [1 ,2 ]
Melis, Marco [1 ]
Demontis, Ambra [1 ]
Biggio, Battista [1 ,2 ]
Affiliations
[1] Univ Cagliari, DIEE, Via Marengo, Cagliari, Italy
[2] Pluribus One, Via Vincenzo Bellini 9, Cagliari, Italy
Funding
European Union's Horizon 2020;
Keywords
Machine learning; Security; Adversarial attacks; Explainability; Python 3
DOI
10.1016/j.softx.2022.101095
Chinese Library Classification (CLC)
TP31 [Computer software];
Subject classification code
081202; 0835;
Abstract
We present secml, an open-source Python library for secure and explainable machine learning. It implements the most popular attacks against machine learning, including test-time evasion attacks to generate adversarial examples against deep neural networks and training-time poisoning attacks against support vector machines and many other algorithms. These attacks enable evaluating the security of learning algorithms and the corresponding defenses under both white-box and black-box threat models. To this end, secml provides built-in functions to compute security evaluation curves, showing how quickly classification performance decreases against increasing adversarial perturbations of the input data. secml also includes explainability methods to help understand why adversarial attacks succeed against a given model, by visualizing the most influential features and training prototypes contributing to each decision. It is distributed under the Apache License 2.0 and hosted at https://github.com/pralab/secml. © 2022 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
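As a usage illustration (not part of the record itself), the sketch below shows how an evasion attack of the kind described in the abstract might be run with secml. The class names (CDLRandomBlobs, CTrainTestSplit, CClassifierSVM, CAttackEvasionPGDLS, CMetricAccuracy) follow the library's public tutorials, but constructor arguments have changed between releases, so exact signatures should be checked against the installed version.

    # Minimal evasion-attack sketch with secml (API names assumed from the
    # library's tutorials; parameters may differ across secml versions).
    from secml.data.loader import CDLRandomBlobs
    from secml.data.splitter import CTrainTestSplit
    from secml.ml.classifiers import CClassifierSVM
    from secml.ml.peval.metrics import CMetricAccuracy
    from secml.adv.attacks.evasion import CAttackEvasionPGDLS

    # Generate a simple two-class dataset and split it into train/test.
    ds = CDLRandomBlobs(n_samples=200, n_features=2,
                        centers=2, random_state=0).load()
    tr, ts = CTrainTestSplit(train_size=0.8, random_state=0).split(ds)

    # Train a (linear) SVM, one of the learners secml wraps.
    clf = CClassifierSVM()
    clf.fit(tr.X, tr.Y)

    # Craft L2-bounded adversarial examples with PGD (test-time evasion).
    # 'double_init_ds' and 'solver_params' follow the tutorial API; older
    # releases used different argument names.
    attack = CAttackEvasionPGDLS(classifier=clf, double_init_ds=tr,
                                 distance='l2', dmax=0.5,
                                 solver_params={'eta': 0.1}, y_target=None)
    y_pred_adv, _, adv_ds, _ = attack.run(ts.X, ts.Y)

    # Compare clean vs. adversarial accuracy: the gap between the two is
    # the kind of quantity plotted on secml's security evaluation curves.
    metric = CMetricAccuracy()
    print('clean accuracy:', metric.performance_score(ts.Y, clf.predict(ts.X)))
    print('adversarial accuracy:', metric.performance_score(ts.Y, y_pred_adv))

Repeating the attack over a range of dmax values and plotting accuracy against perturbation size reproduces, in spirit, the security evaluation curves that the library computes with its built-in functions.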
Pages: 4