DVS-Attacks: Adversarial Attacks on Dynamic Vision Sensors for Spiking Neural Networks

Cited by: 5
Authors
Marchisio, Alberto [1 ]
Pira, Giacomo [2 ]
Martina, Maurizio [2 ]
Masera, Guido [2 ]
Shafique, Muhammad [3 ]
Affiliations
[1] Tech Univ Wien, Vienna, Austria
[2] Politecn Torino, Turin, Italy
[3] New York Univ, Abu Dhabi, U Arab Emirates
Source
2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN) | 2021
Keywords
Spiking Neural Networks; SNNs; Deep Learning; Adversarial Attacks; Security; Robustness; Defense; Filter; Perturbation; Noise; Dynamic Vision Sensors; DVS; Neuromorphic; Event-Based
DOI
10.1109/IJCNN52387.2021.9534364
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Spiking Neural Networks (SNNs), despite being energy-efficient when implemented on neuromorphic hardware and coupled with event-based Dynamic Vision Sensors (DVS), are vulnerable to security threats such as adversarial attacks, i.e., small perturbations added to the input to induce a misclassification. To this end, we propose DVS-Attacks, a set of stealthy yet efficient adversarial attack methodologies designed to perturb the event sequences that compose the input of the SNNs. First, we show that noise filters for DVS can be used as defense mechanisms against adversarial attacks. Afterwards, we implement several attacks and test them in the presence of two types of noise filters for DVS cameras. The experimental results show that the filters can only partially defend the SNNs against our proposed DVS-Attacks. Using the best settings for the noise filters, our proposed Mask Filter-Aware Dash Attack reduces the accuracy by more than 20% on the DVS-Gesture dataset and by more than 65% on the MNIST dataset, compared to the original clean frames. The source code of all the proposed DVS-Attacks and noise filters is released at https://github.com/albertomarchisio/DVS-Attacks.
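The abstract evaluates DVS noise filters as a defense against event-stream perturbations. As a rough illustration of how such filters operate, the sketch below implements a generic spatio-temporal background-activity filter, which keeps an event only if a neighboring pixel fired recently; the event tuple format, window size, and neighborhood radius are illustrative assumptions, not the paper's exact filter.

```python
def background_activity_filter(events, time_window=5000, neighborhood=1):
    """Generic DVS background-activity filter (illustrative sketch).

    Keeps an event only if a spatially neighboring pixel produced an
    event within `time_window` microseconds before it; isolated events
    (typical of sensor noise or injected perturbations) are discarded.

    events: list of (timestamp_us, x, y, polarity), sorted by timestamp.
    """
    last_ts = {}  # (x, y) -> most recent event timestamp at that pixel
    kept = []
    for t, x, y, p in events:
        supported = False
        for dx in range(-neighborhood, neighborhood + 1):
            for dy in range(-neighborhood, neighborhood + 1):
                if dx == 0 and dy == 0:
                    continue  # a pixel cannot support itself
                ts = last_ts.get((x + dx, y + dy))
                if ts is not None and t - ts <= time_window:
                    supported = True
                    break
            if supported:
                break
        if supported:
            kept.append((t, x, y, p))
        last_ts[(x, y)] = t
    return kept


# Toy stream: two correlated events at adjacent pixels plus one isolated
# event far away; the filter keeps the supported event and drops the rest.
stream = [
    (1000, 10, 10, 1),   # first event: no prior neighbor, dropped
    (1500, 11, 10, 1),   # neighbor of (10, 10) within the window: kept
    (2000, 50, 50, 0),   # isolated: dropped
]
print(background_activity_filter(stream))
```

A filter-aware attack, such as the Mask Filter-Aware Dash Attack named in the abstract, would shape its injected events so that they retain this kind of spatio-temporal support, which is why the paper finds that filtering alone is only a partial defense.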
Pages: 9
Related Papers
50 records in total
  • [41] Neural Adversarial Attacks with Random Noises
    Hajri, Hatem
    Cesaire, Manon
    Schott, Lucas
    Lamprier, Sylvain
    Gallinari, Patrick
    INTERNATIONAL JOURNAL ON ARTIFICIAL INTELLIGENCE TOOLS, 2023, 32 (05)
  • [42] Adversarial Attacks with Defense Mechanisms on Convolutional Neural Networks and Recurrent Neural Networks for Malware Classification
    Alzaidy, Sharoug
    Binsalleeh, Hamad
    APPLIED SCIENCES-BASEL, 2024, 14 (04)
  • [43] Comparison of the Resilience of Convolutional and Cellular Neural Networks Against Adversarial Attacks
    Horvath, Andras
    2022 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS 22), 2022, : 2348 - 2352
  • [44] Robustness of Sparsely Distributed Representations to Adversarial Attacks in Deep Neural Networks
    Sardar, Nida
    Khan, Sundas
    Hintze, Arend
    Mehra, Priyanka
    ENTROPY, 2023, 25 (06)
  • [45] Improving Robustness Against Adversarial Attacks with Deeply Quantized Neural Networks
    Ayaz, Ferheen
    Zakariyya, Idris
    Cano, Jose
    Keoh, Sye Loong
    Singer, Jeremy
    Pau, Danilo
    Kharbouche-Harrari, Mounia
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [46] Grasping Adversarial Attacks on Deep Convolutional Neural Networks for Cholangiocarcinoma Classification
    Diyasa, I. Gede Susrama Mas
    Wahid, Radical Rakhman
    Amiruddin, Brilian Putra
    2021 INTERNATIONAL CONFERENCE ON E-HEALTH AND BIOENGINEERING (EHB 2021), 9TH EDITION, 2021,
  • [47] Universal adversarial attacks on deep neural networks for medical image classification
    Hokuto Hirano
    Akinori Minagi
    Kazuhiro Takemoto
    BMC Medical Imaging, 21
  • [48] Evolving Hyperparameters for Training Deep Neural Networks against Adversarial Attacks
    Liu, Jia
    Jin, Yaochu
    2019 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (IEEE SSCI 2019), 2019, : 1778 - 1785
  • [49] Is Approximation Universally Defensive Against Adversarial Attacks in Deep Neural Networks?
    Siddique, Ayesha
    Hoque, Khaza Anuarul
    PROCEEDINGS OF THE 2022 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION (DATE 2022), 2022, : 364 - 369
  • [50] A Data Augmentation-Based Defense Method Against Adversarial Attacks in Neural Networks
    Zeng, Yi
    Qiu, Han
    Memmi, Gerard
    Qiu, Meikang
    ALGORITHMS AND ARCHITECTURES FOR PARALLEL PROCESSING, ICA3PP 2020, PT II, 2020, 12453 : 274 - 289