50 items in total
- [31] AdvQuNN: A Methodology for Analyzing the Adversarial Robustness of Quanvolutional Neural Networks. 2024 IEEE International Conference on Quantum Software (IEEE QSW 2024), 2024: 175-181
- [32] Safety-critical computer vision: an empirical survey of adversarial evasion attacks and defenses on computer vision systems. Artificial Intelligence Review, 2023, 56: 217-251
- [38] Fake News Detection via NLP is Vulnerable to Adversarial Attacks. Proceedings of the 11th International Conference on Agents and Artificial Intelligence (ICAART), Vol. 2, 2019: 794-800
- [39] The Impact of Model Variations on the Robustness of Deep Learning Models in Adversarial Settings. 2024 Silicon Valley Cybersecurity Conference (SVCC 2024), 2024
- [40] Adversarial Robustness for Deep Learning-Based Wildfire Prediction Models. Fire-Switzerland, 2025, 8(2)