- [1] HE K M, ZHANG X G, REN S Q, et al., Identity mappings in deep residual networks, 2016 14th European Conference on Computer Vision, pp. 630-645, (2016)
- [2] HE K M, GKIOXARI G, DOLLAR P, et al., Mask R-CNN, 2017 IEEE International Conference on Computer Vision, pp. 2980-2988, (2017)
- [3] GOODFELLOW I J, SHLENS J, SZEGEDY C., Explaining and harnessing adversarial examples, 2015 3rd International Conference on Learning Representations, pp. 1-11, (2015)
- [4] LIU Y P, CHEN X Y, LIU C, et al., Delving into transferable adversarial examples and black-box attacks, 2017 5th International Conference on Learning Representations, pp. 1-24, (2017)
- [5] ATHALYE A, ENGSTROM L, ILYAS A, et al., Synthesizing robust adversarial examples, 2018 35th International Conference on Machine Learning, pp. 284-293, (2018)
- [6] TRAMER F, KURAKIN A, PAPERNOT N, et al., Ensemble adversarial training: Attacks and defenses, 2018 6th International Conference on Learning Representations, pp. 1-22, (2018)
- [7] LIAO F Z, LIANG M, DONG Y P, et al., Defense against adversarial attacks using high-level representation guided denoiser, 2018 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1778-1787, (2018)
- [8] XIE C H, WANG J Y, ZHANG Z S, et al., Mitigating adversarial effects through randomization, 2018 6th International Conference on Learning Representations, pp. 1-16, (2018)
- [9] RAGHUNATHAN A, STEINHARDT J, LIANG P., Certified defenses against adversarial examples, 2018 6th International Conference on Learning Representations, pp. 1-15, (2018)
- [10] RAUBER J, BRENDEL W, BETHGE M., Foolbox v0.8.0: A Python toolbox to benchmark the robustness of machine learning models, (2020)