DVS-Attacks: Adversarial Attacks on Dynamic Vision Sensors for Spiking Neural Networks

Cited by: 5
|
Authors
Marchisio, Alberto [1 ]
Pira, Giacomo [2 ]
Martina, Maurizio [2 ]
Masera, Guido [2 ]
Shafique, Muhammad [3 ]
Affiliations
[1] Tech Univ Wien, Vienna, Austria
[2] Politecn Torino, Turin, Italy
[3] New York Univ, Abu Dhabi, U Arab Emirates
Source
2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN) | 2021
Keywords
Spiking Neural Networks; SNNs; Deep Learning; Adversarial Attacks; Security; Robustness; Defense; Filter; Perturbation; Noise; Dynamic Vision Sensors; DVS; Neuromorphic; Event-Based;
DOI
10.1109/IJCNN52387.2021.9534364
CLC classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104 ; 0812 ; 0835 ; 1405
Abstract
Spiking Neural Networks (SNNs), despite being energy-efficient when implemented on neuromorphic hardware and coupled with event-based Dynamic Vision Sensors (DVS), are vulnerable to security threats, such as adversarial attacks, i.e., small perturbations added to the input for inducing a misclassification. Toward this, we propose DVS-Attacks, a set of stealthy yet efficient adversarial attack methodologies targeted to perturb the event sequences that compose the input of the SNNs. First, we show that noise filters for DVS can be used as defense mechanisms against adversarial attacks. Afterwards, we implement several attacks and test them in the presence of two types of noise filters for DVS cameras. The experimental results show that the filters can only partially defend the SNNs against our proposed DVS-Attacks. Using the best settings for the noise filters, our proposed Mask Filter-Aware Dash Attack reduces the accuracy by more than 20% on the DVS-Gesture dataset and by more than 65% on the MNIST dataset, compared to the original clean frames. The source code of all the proposed DVS-Attacks and noise filters is released at https://github.com/albertomarchisio/DVS-Attacks.
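The abstract notes that noise filters for DVS cameras can serve as a partial defense against event-sequence attacks. As an illustration only (not the authors' implementation), the sketch below shows a common DVS background-activity filter: an event is kept only if a spatially neighboring pixel fired within a short time window, which suppresses isolated noise or injected events. The function name, event layout `(x, y, t, polarity)`, and the 5 ms window are assumptions.

```python
import numpy as np

def background_activity_filter(events, width, height, dt=5000):
    """Keep an event only if a nearby pixel fired within dt microseconds.

    events: array of (x, y, t, polarity) rows, sorted by timestamp t.
    Returns the list of surviving events.
    """
    # Last event timestamp seen at each pixel; -inf means "never fired".
    last_ts = np.full((height, width), -np.inf)
    kept = []
    for x, y, t, p in events:
        x, y = int(x), int(y)
        # Bounds of the 3x3 neighborhood around (x, y).
        x0, x1 = max(0, x - 1), min(width, x + 2)
        y0, y1 = max(0, y - 1), min(height, y + 2)
        recent = (t - last_ts[y0:y1, x0:x1]) <= dt
        # Require support from at least one *neighbor*, excluding the pixel itself.
        support = recent.sum() - ((t - last_ts[y, x]) <= dt)
        if support > 0:
            kept.append((x, y, t, p))
        last_ts[y, x] = t  # update even for filtered events
    return kept
```

An isolated event with no recent activity in its neighborhood is discarded, while an event preceded by a close-by event within `dt` survives; tightening `dt` filters more aggressively but also removes legitimate activity, which is consistent with the paper's observation that filters only partially defend the SNN.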
Pages: 9