- [31] Detecting Adversarial Examples in Deep Neural Networks using Normalizing Filters. Proceedings of the 11th International Conference on Agents and Artificial Intelligence (ICAART), vol. 2, 2019, pp. 164-173.
- [32] Natural Scene Statistics for Detecting Adversarial Examples in Deep Neural Networks. 2020 IEEE 22nd International Workshop on Multimedia Signal Processing (MMSP), 2020.
- [33] Digital Watermark Perturbation for Adversarial Examples to Fool Deep Neural Networks. 2021 International Joint Conference on Neural Networks (IJCNN), 2021.
- [34] On the Robustness to Adversarial Examples of Neural ODE Image Classifiers. 2019 IEEE International Workshop on Information Forensics and Security (WIFS), 2019.
- [35] Analyzing the Robustness of Deep Learning Against Adversarial Examples. 2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton), 2018, pp. 1060-1064.
- [37] Improving the Robustness of Deep Neural Networks via Adversarial Training with Triplet Loss. Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, 2019, pp. 2909-2915.
- [38] Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing Their Input Gradients. Thirty-Second AAAI Conference on Artificial Intelligence / Thirtieth Innovative Applications of Artificial Intelligence Conference / Eighth AAAI Symposium on Educational Advances in Artificial Intelligence, 2018, pp. 1660-1669.