293 entries in total
[61]
Feinman R., 2017, ARXIV
[62]
Robustness Verification Boosting for Deep Neural Networks [J]. 2019 6TH INTERNATIONAL CONFERENCE ON INFORMATION SCIENCE AND CONTROL ENGINEERING (ICISCE 2019), 2019: 531-535.
[63]
Folz J., 2020, IEEE WINT CONF APPL, P3568, DOI 10.1109/WACV45572.2020.9093310
[64]
Freitas S., 2020, ARXIV
[65]
Adversarial Perturbations Fool Deepfake Detectors [J]. 2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020.
[66]
Gao J., 2017, ARXIV
[67]
Gao Y., 2020, ARXIV
[68]
Ghosh P., 2019, AAAI CONF ARTIF INTE, P541
[69]
GitHub, jason71995/adversarialattack: Adversarial Attack on Keras and Tensorflow 2.0
[70]
GitHub, Trusted-AI/adversarial-robustness-toolbox: Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams