50 items in total
- [1] Interpretability Analysis of Deep Neural Networks With Adversarial Examples. Zidonghua Xuebao/Acta Automatica Sinica, 2022, 48(01): 75-86
- [2] Adversarial Watermarking to Attack Deep Neural Networks. 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019: 1962-1966
- [3] Safety Verification of Deep Neural Networks. Computer Aided Verification, CAV 2017, Pt I, 2017, 10426: 3-29
- [4] Cocktail Universal Adversarial Attack on Deep Neural Networks. Computer Vision - ECCV 2024, Pt LXV, 2025, 15123: 396-412
- [5] Diversity Adversarial Training against Adversarial Attack on Deep Neural Networks. Symmetry-Basel, 2021, 13(03)
- [6] Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing Their Input Gradients. Thirty-Second AAAI Conference on Artificial Intelligence / Thirtieth Innovative Applications of Artificial Intelligence Conference / Eighth AAAI Symposium on Educational Advances in Artificial Intelligence, 2018: 1660-1669
- [7] ADMM Attack: An Enhanced Adversarial Attack for Deep Neural Networks with Undetectable Distortions. 24th Asia and South Pacific Design Automation Conference (ASP-DAC 2019), 2019: 499-505
- [10] Survey on Testing of Deep Neural Networks. Ruan Jian Xue Bao/Journal of Software, 2020, 31(05): 1255-1275