15 entries in total
[1] Towards Evaluating the Robustness of Neural Networks [C]. 2017 IEEE Symposium on Security and Privacy (SP), 2017: 39-57.
[2] Securing Deep Spiking Neural Networks against Adversarial Attacks through Inherent Structural Parameters [C]. Proceedings of the 2021 Design, Automation & Test in Europe Conference & Exhibition (DATE 2021), 2021: 774-779.
[3] Adversarial Attacks on Deep Neural Networks for Time Series Classification [C]. 2019 International Joint Conference on Neural Networks (IJCNN), 2019.
[5] Kurakin A. arXiv:1611.01236, 2017.
[7] Lord N A. Attacking deep networks with surrogate-based adversarial black-box methods is easy. 2022.
[8] DVS-Attacks: Adversarial Attacks on Dynamic Vision Sensors for Spiking Neural Networks [C]. 2021 International Joint Conference on Neural Networks (IJCNN), 2021.
[9] Moody G B. MIT-BIH Arrhythmia Database. 1992.
[10] Narodytska N. Simple black-box adversarial attacks on deep neural networks. 2017, V2, P2.