DVS-Attacks: Adversarial Attacks on Dynamic Vision Sensors for Spiking Neural Networks

Cited by: 5
Authors
Marchisio, Alberto [1]
Pira, Giacomo [2]
Martina, Maurizio [2]
Masera, Guido [2]
Shafique, Muhammad [3]
Affiliations
[1] Tech Univ Wien, Vienna, Austria
[2] Politecn Torino, Turin, Italy
[3] New York Univ, Abu Dhabi, U Arab Emirates
Source
2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN) | 2021
Keywords
Spiking Neural Networks; SNNs; Deep Learning; Adversarial Attacks; Security; Robustness; Defense; Filter; Perturbation; Noise; Dynamic Vision Sensors; DVS; Neuromorphic; Event-Based;
DOI
10.1109/IJCNN52387.2021.9534364
CLC Classification
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Spiking Neural Networks (SNNs), despite being energy-efficient when implemented on neuromorphic hardware and coupled with event-based Dynamic Vision Sensors (DVS), are vulnerable to security threats, such as adversarial attacks, i.e., small perturbations added to the input to induce a misclassification. To this end, we propose DVS-Attacks, a set of stealthy yet efficient adversarial attack methodologies designed to perturb the event sequences that form the input of SNNs. First, we show that noise filters for DVS can be used as defense mechanisms against adversarial attacks. We then implement several attacks and test them in the presence of two types of noise filters for DVS cameras. The experimental results show that the filters can only partially defend the SNNs against our proposed DVS-Attacks. Using the best settings for the noise filters, our proposed Mask Filter-Aware Dash Attack reduces the accuracy by more than 20% on the DVS-Gesture dataset and by more than 65% on the MNIST dataset, compared to the original clean frames. The source code of all the proposed DVS-Attacks and noise filters is released at https://github.com/albertomarchisio/DVS-Attacks.
Pages: 9
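
As a concrete illustration of the ideas in the abstract, the sketch below shows, in Python, (i) a simple neighborhood-based DVS noise filter of the kind evaluated as a defense, and (ii) the injection of a sparse, temporally repeating two-pixel "dash" of events whose mutual spatio-temporal support lets it survive such a filter. The (x, y, timestamp, polarity) event format, all function names, and the parameter values are illustrative assumptions for this sketch, not the API of the released code at https://github.com/albertomarchisio/DVS-Attacks.

```python
import numpy as np

def background_activity_filter(events, radius=1, time_window_us=5000):
    """Drop events that have no recent spatio-temporal neighbor: a common
    DVS denoising heuristic, standing in here for the noise-filter defenses
    the paper evaluates (the actual filters may differ)."""
    last_seen = {}  # (x, y) -> timestamp of the most recent event at that pixel
    kept = []
    for x, y, t, p in events:  # events must be sorted by timestamp
        supported = any(
            t - last_seen.get((x + dx, y + dy), float("-inf")) <= time_window_us
            for dx in range(-radius, radius + 1)
            for dy in range(-radius, radius + 1)
            if (dx, dy) != (0, 0)
        )
        last_seen[(x, y)] = t
        if supported:
            kept.append((x, y, t, p))
    return kept

def inject_dash_perturbation(events, x0, y0, t0=0, n_repeats=10, dt_us=1000):
    """Insert a two-pixel "dash" of spurious events that repeats over time.
    The two pixels keep supporting each other spatio-temporally, so a
    neighborhood-based filter tends to let them through: a loose analogue
    of the filter-aware attack idea described in the abstract."""
    dash = [(x0 + i, y0, t0 + k * dt_us, 1)
            for k in range(n_repeats) for i in (0, 1)]
    return sorted(events + dash, key=lambda e: e[2])

# Toy usage: random background events plus an injected dash.
rng = np.random.default_rng(0)
clean = sorted(
    ((int(rng.integers(0, 128)), int(rng.integers(0, 128)),
      int(rng.integers(0, 100_000)), 1) for _ in range(200)),
    key=lambda e: e[2],
)
attacked = inject_dash_perturbation(clean, x0=64, y0=64)
filtered = background_activity_filter(attacked)
print(f"clean={len(clean)} attacked={len(attacked)} after filter={len(filtered)}")
```

In the actual DVS-Attacks, per the abstract, the perturbations are additionally optimized to degrade the SNN's classification accuracy in the presence of the filter; this sketch only illustrates why locally correlated injected events can evade a correlation-based denoiser.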