Black-Box Adversarial Attacks on Spiking Neural Network for Time Series Data

Times Cited: 0
Authors
Hutchins, Jack [1 ]
Ferrer, Diego [1 ]
Fillers, James [1 ]
Schuman, Catherine [1 ]
Affiliation
[1] Univ Tennessee, Dept EECS, Knoxville, TN 37996 USA
Source
2024 INTERNATIONAL CONFERENCE ON NEUROMORPHIC SYSTEMS, ICONS | 2024
Keywords
Spiking Neural Networks; Adversarial Attacks; Black-Box Attack; Robustness
DOI
10.1109/ICONS62911.2024.00040
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
This paper examines the vulnerability of spiking neural networks (SNNs) trained on time series data to adversarial attacks, employing artificial neural networks as surrogate models. We specifically explore the use of a 1D Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network as surrogates to approximate the dynamics of SNNs. Our comparative analysis finds the LSTM surrogate particularly effective, reflecting sequential data-processing capabilities similar to those of SNNs. Using two adversarial attack methods, the Fast Gradient Sign Method (FGSM) and the Carlini & Wagner (C&W) attack, we demonstrate that adversarial examples can significantly degrade the performance of SNNs. Notably, both methods, especially when applied through the LSTM surrogate, reduce the accuracy of the SNN to below that of random label choice, indicating a severe vulnerability. These results underscore the importance of incorporating robust defense mechanisms against such attacks into the design and deployment of neural networks that handle time series data.
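The abstract describes a transfer-style black-box attack: gradients are taken from a trained surrogate network (e.g., an LSTM) and the perturbed inputs are then presented to the target SNN. The sketch below illustrates only the FGSM step on a hypothetical PyTorch LSTM surrogate; the architecture, epsilon value, and synthetic input shapes are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (assumed, not the paper's code) of surrogate-based FGSM on time series.
import torch
import torch.nn as nn

class LSTMSurrogate(nn.Module):
    """Hypothetical LSTM surrogate classifier for time series inputs."""
    def __init__(self, n_features=1, hidden=64, n_classes=5):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                  # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # classify from the final time step

def fgsm_attack(model, x, y, eps=0.05):
    """Return x + eps * sign(grad_x loss), the standard FGSM perturbation."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

if __name__ == "__main__":
    torch.manual_seed(0)
    surrogate = LSTMSurrogate()
    x = torch.randn(8, 187, 1)             # assumed window length and feature count
    y = torch.randint(0, 5, (8,))
    x_adv = fgsm_attack(surrogate, x, y)
    print((x_adv - x).abs().max())         # perturbation bounded by eps
```

In the black-box setting sketched here, x_adv would then be submitted to the target SNN, relying on the transferability of adversarial examples from the surrogate to the target model.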
Pages: 229-233
Number of pages: 5