Bridging Machine Learning and Cryptography in Defence Against Adversarial Attacks

Cited by: 8
Authors
Taran, Olga [1]
Rezaeifar, Shideh [1]
Voloshynovskiy, Slava [1]
Affiliations
[1] Univ Geneva, Dept Comp Sci, Geneva, Switzerland
Keywords
Adversarial attacks; Defence; Data-independent transform; Secret key; Cryptography principle
DOI
10.1007/978-3-030-11012-3_23
CLC classification
TP18 [Theory of artificial intelligence]
Discipline codes
081104; 0812; 0835; 1405
Abstract
In the last decade, deep learning algorithms have become very popular thanks to their performance on many machine learning and computer vision tasks. However, most deep learning architectures are vulnerable to so-called adversarial examples, which calls into question the security of deep neural networks (DNNs) in many security- and trust-sensitive domains. Most existing adversarial attacks exploit the differentiability of the DNN cost function, while defence strategies rely mostly on machine learning and signal-processing principles that either detect-and-reject or filter out the adversarial perturbations, neglecting the classical cryptographic component of defence entirely. In this work, we propose a new defence mechanism based on Kerckhoffs's second cryptographic principle, which states that the defence and classification algorithms may be publicly known, but not the key. To remain consistent with the assumption that the attacker has no access to the secret key, we focus primarily on a gray-box scenario and do not address the white-box one. More particularly, we assume that the attacker has no direct access to the secret block, but that (a) he fully knows the system architecture, (b) he has access to the data used for training and testing, and (c) he can observe the output of the classifier for each given input. We show empirically that our system is effective against the best-known state-of-the-art attacks in black-box and gray-box scenarios.
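The core idea sketched in the abstract, a data-independent transform parametrised by a secret key and applied before classification, can be illustrated with a minimal example. The function name `keyed_sign_flip` and the use of a plain elementwise sign-flip mask are hypothetical simplifications for illustration only; the paper's actual defence applies key-based randomisation in a fixed transform domain rather than directly on the raw input.

```python
import numpy as np

def keyed_sign_flip(x, key):
    # Data-independent transform: a fixed +/-1 mask derived
    # solely from the secret key, applied elementwise to x.
    # The mask never depends on the input, so it cannot leak
    # data-specific structure to an observer.
    rng = np.random.default_rng(key)
    mask = rng.choice([-1.0, 1.0], size=x.shape)
    return x * mask

# The transform is an involution: applying it twice with the
# same key recovers the original input, so the defender can
# invert it cheaply before (or inside) the classifier.
x = np.arange(6, dtype=float).reshape(2, 3)
y = keyed_sign_flip(x, key=42)
assert np.allclose(keyed_sign_flip(y, key=42), x)
```

Because the mask depends only on the key and not on the data, the legitimate party can apply and undo the transform at negligible cost, whereas an attacker without the key cannot back-propagate gradients through the correct transform, which is the gray-box assumption the abstract describes.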
Pages: 267-279 (13 pages)
Related papers (50 in total)
  • [1] Stochastic Computing as a Defence Against Adversarial Attacks. Neugebauer, Florian; Vekariya, Vivek; Polian, Ilia; Hayes, John P. 2023 53rd Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W), 2023: 191-194
  • [2] Model Agnostic Defence Against Backdoor Attacks in Machine Learning. Udeshi, Sakshi; Peng, Shanshan; Woo, Gerald; Loh, Lionell; Rawshan, Louth; Chattopadhyay, Sudipta. IEEE Transactions on Reliability, 2022, 71(2): 880-895
  • [3] Machine Learning for Automatic Defence Against Distributed Denial of Service Attacks. Seufert, Stefan; O'Brien, Darragh. 2007 IEEE International Conference on Communications, Vols 1-14, 2007: 1217-1222
  • [4] Discretization Based Solutions for Secure Machine Learning Against Adversarial Attacks. Panda, Priyadarshini; Chakraborty, Indranil; Roy, Kaushik. IEEE Access, 2019, 7: 70157-70168
  • [5] An Adversarial Machine Learning Model Against Android Malware Evasion Attacks. Chen, Lingwei; Hou, Shifu; Ye, Yanfang; Chen, Lifei. Web and Big Data, 2017, 10612: 43-55
  • [6] Adversarial Machine Learning Attacks Against Video Anomaly Detection Systems. Mumcu, Furkan; Doshi, Keval; Yilmaz, Yasin. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2022: 205-212
  • [7] Addressing Adversarial Attacks Against Security Systems Based on Machine Learning. Apruzzese, Giovanni; Colajanni, Michele; Ferretti, Luca; Marchetti, Mirco. 2019 11th International Conference on Cyber Conflict (CyCon): Silent Battle, 2019: 383-400
  • [8] Knowledge Enhanced Machine Learning Pipeline against Diverse Adversarial Attacks. Gurel, Nezihe Merve; Qi, Xiangyu; Rimanic, Luka; Zhang, Ce; Li, Bo. International Conference on Machine Learning, Vol 139, 2021
  • [9] A Network Security Classifier Defense: Against Adversarial Machine Learning Attacks. De Lucia, Michael J.; Cotton, Chase. Proceedings of the 2nd ACM Workshop on Wireless Security and Machine Learning (WiseML 2020), 2020: 67-73
  • [10] Adversarial Attacks on Medical Machine Learning. Finlayson, Samuel G.; Bowers, John D.; Ito, Joichi; Zittrain, Jonathan L.; Beam, Andrew L.; Kohane, Isaac S. Science, 2019, 363(6433): 1287-1289