50 items in total
- [1] R-SNN: An Analysis and Design Methodology for Robustifying Spiking Neural Networks against Adversarial Attacks through Noise Filters for Dynamic Vision Sensors. 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021: 6315-6321.
- [2] Securing Deep Spiking Neural Networks against Adversarial Attacks through Inherent Structural Parameters. Proceedings of the 2021 Design, Automation & Test in Europe Conference & Exhibition (DATE 2021), 2021: 774-779.
- [4] Exploring Vulnerabilities in Spiking Neural Networks: Direct Adversarial Attacks on Raw Event Data. Computer Vision - ECCV 2024, Pt LXXII, 2025, 15130: 412-428.
- [5] The Adversarial Attacks Threats on Computer Vision: A Survey. 2019 IEEE 16th International Conference on Mobile Ad Hoc and Sensor Systems Workshops (MASSW 2019), 2019: 25-30.
- [6] Exploring misclassifications of robust neural networks to enhance adversarial attacks. Applied Intelligence, 2023, 53: 19843-19859.