Stochastic Computing as a Defence Against Adversarial Attacks

Times Cited: 0
Authors
Neugebauer, Florian [1 ]
Vekariya, Vivek [2 ]
Polian, Ilia [1 ]
Hayes, John P. [3 ]
Affiliations
[1] Univ Stuttgart, Inst Comp Architecture & Comp Engn, Stuttgart, Germany
[2] Fortiss GmbH, Munich, Germany
[3] Univ Michigan, Comp Engn Lab, Ann Arbor, MI 48109 USA
Funding
US National Science Foundation (NSF)
Keywords
stochastic computing; neural network; adversarial attack;
DOI
10.1109/DSN-W58399.2023.00053
Chinese Library Classification
TP301 [Theory and Methods]
Discipline Code
081202
Abstract
Neural networks (NNs) are increasingly employed in safety-critical systems. It is therefore necessary to ensure that these NNs are robust against malicious interference in the form of adversarial attacks, which cause an NN to misclassify inputs. Many proposed defenses against such attacks incorporate randomness in order to make it harder for an attacker to find small input modifications that result in misclassification. Stochastic computing (SC) is a type of approximate computing based on pseudo-random bit-streams that has been successfully used to implement convolutional neural networks (CNNs). Previous results have suggested that such stochastic CNNs (SCNNs) are partially robust against adversarial attacks. In this work, we demonstrate that SCNNs do indeed possess inherent protection against some powerful adversarial attacks. Our results show that the white-box C&W attack is up to 16x less successful against an SCNN than against an equivalent binary NN, and the Boundary Attack even fails to generate adversarial inputs in many cases.
Pages: 191-194
Page count: 4
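
As background for the abstract's mention of pseudo-random bit-streams, the following is a minimal Python sketch of unipolar stochastic multiplication, the basic SC operation underlying SCNN arithmetic. It only illustrates the general SC principle; it is not the SCNN implementation evaluated in the paper, and the stream length and seed are arbitrary assumptions.

```python
import numpy as np

def encode(value, length, rng):
    """Encode a value in [0, 1] as a pseudo-random bit-stream whose
    fraction of 1s approximates the value (unipolar SC encoding)."""
    return (rng.random(length) < value).astype(np.uint8)

def decode(stream):
    """Recover the encoded value as the fraction of 1s in the stream."""
    return stream.mean()

rng = np.random.default_rng(seed=0)   # illustrative seed
length = 1024                         # longer streams reduce approximation error

a = encode(0.8, length, rng)
b = encode(0.5, length, rng)

# In unipolar SC, a single AND gate multiplies two independent streams:
# P(a_i AND b_i = 1) = P(a_i = 1) * P(b_i = 1) = 0.8 * 0.5 = 0.4
product = a & b
print(decode(product))  # approximately 0.4, with stream-length-dependent noise
```

The randomness of the bit-stream representation is the property the paper connects to adversarial robustness: repeated evaluations of the same input yield slightly different outputs, which hinders attacks that rely on precise, repeatable gradient or boundary estimates.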