[21] Quiring E, Rieck K., Backdooring and poisoning neural networks with image-scaling attacks, Proc. of the 2020 IEEE Security and Privacy Workshops, pp. 41-47, (2020)
[22] Xiao QX, Chen YF, Shen C, Chen Y, Li K., Seeing is not believing: Camouflage attacks on image scaling algorithms, Proc. of the 28th USENIX Security Symp., pp. 443-460, (2019)
[23] Wenger E, Passananti J, Bhagoji AN, Yao YS, Zheng HT, Zhao BY., Backdoor attacks against deep learning systems in the physical world, Proc. of the 2021 IEEE/CVF Conf. on Computer Vision and Pattern Recognition, pp. 3202-3211, (2021)
[24] Bagdasaryan E, Shmatikov V., Blind backdoors in deep learning models, Proc. of the 30th USENIX Security Symp. USENIX Association, pp. 1505-1521, (2021)
[25] Shumailov I, Shumaylov Z, Kazhdan D, Zhao YR, Papernot N, Erdogdu MA, Anderson RJ., Manipulating SGD with data ordering attacks, Proc. of the 34th Int’l Conf. on Neural Information Processing Systems, pp. 18021-18032, (2021)
[26] Kurita K, Michel P, Neubig G., Weight poisoning attacks on pretrained models, Proc. of the 58th Annual Meeting of the Association for Computational Linguistics. ACL, pp. 2793-2806, (2020)
[27] Dong YP, Yang X, Deng ZJ, Pang TY, Xiao ZH, Su H, Zhu J., Black-box detection of backdoor attacks with limited information and data, Proc. of the 2021 IEEE/CVF Int’l Conf. on Computer Vision, pp. 16462-16471, (2021)
[28] Chou E, Tramer F, Pellegrino G., SentiNet: Detecting localized universal attacks against deep learning systems, Proc. of the 2020 IEEE Security and Privacy Workshops, pp. 48-54, (2020)
[29] Chen HL, Fu C, Zhao JS, Koushanfar F., DeepInspect: A black-box Trojan detection and mitigation framework for deep neural networks, Proc. of the 28th Int’l Joint Conf. on Artificial Intelligence, pp. 4658-4664, (2019)
[30] Shen GY, Liu YQ, Tao GH, An SW, Xu QL, Cheng SY, Ma SQ, Zhang XY., Backdoor scanning for deep neural networks through K-arm optimization, Proc. of the 38th Int’l Conf. on Machine Learning. PMLR, pp. 9525-9536, (2021)