194 references in total
- [1] Song C, Ristenpart T, Shmatikov V., Machine learning models that remember too much, Proc. of the 2017 ACM SIGSAC Conf. on Computer and Communications Security, pp. 587-601, (2017)
- [2] Tramer F, Zhang F, Juels A, et al., Stealing machine learning models via prediction APIs, Proc. of the 25th USENIX Security Symp. (USENIX Security 2016), pp. 601-618, (2016)
- [3] Shen S, Tople S, Saxena P., AUROR: Defending against poisoning attacks in collaborative deep learning systems, Proc. of the 32nd Annual Conf. on Computer Security Applications, pp. 508-519, (2016)
- [4] Nelson B, Barreno M, Chi FJ, et al., Exploiting machine learning to subvert your spam filter, Proc. of the 1st USENIX Workshop on Large-scale Exploits and Emergent Threats (LEET 2008), pp. 1-9, (2008)
- [5] Jagielski M, Oprea A, Biggio B, et al., Manipulating machine learning: Poisoning attacks and countermeasures for regression learning, Proc. of the 2018 IEEE Symp. on Security and Privacy (SP), pp. 19-35, (2018)
- [6] Nelson B, Biggio B, Laskov P., Understanding the risk factors of learning in adversarial environments, Proc. of the 4th ACM Workshop on Security and Artificial Intelligence (AISec 2011), pp. 87-92, (2011)
- [7] Barreno M, Nelson B, Sears R, et al., Can machine learning be secure?, Proc. of the 2006 ACM Symp. on Information, Computer and Communications Security, pp. 16-25, (2006)
- [8] Newsome J, Karp B, Song D., Paragraph: Thwarting signature learning by training maliciously, Proc. of the Int'l Workshop on Recent Advances in Intrusion Detection, pp. 81-105, (2006)
- [9] Rubinstein BI, Nelson B, Huang L, et al., ANTIDOTE: Understanding and defending against poisoning of anomaly detectors, Proc. of the 9th ACM SIGCOMM Conf. on Internet Measurement, pp. 1-14, (2009)
- [10] Xiao H, Biggio B, Brown G, et al., Is feature selection secure against training data poisoning?, Proc. of the Int'l Conf. on Machine Learning, pp. 1689-1698, (2015)