共 35 条
- [1] Dong YP, Su H, Zhu J., Towards interpretable deep neural networks by leveraging adversarial examples, Acta Automatica Sinica, 48, 1, (2020)
- [2] Huang X, Kwiatkowska M, Wang S, Wu M., Safety verification of deep neural networks, Proc. of the 29th Int’l Conf. on Computer Aided Verification, pp. 3-29, (2017)
- [3] Wang Z, Yan M, Liu S, Chen JJ, Zhang DD, Wu Z, Chen X., Survey on testing of deep neural networks, Ruan Jian Xue Bao/Journal of Software, 31, 5, pp. 1255-1275, (2020)
- [4] Liu C, Arnon T, Lazarus C, Strong C, Barrett C, Kochenderfer MJ., Algorithms for verifying deep neural networks, Foundations and Trends® in Optimization, 4, 3-4, (2021)
- [5] Li L, Qi X, Xie T, Li B., SoK: Certified robustness for deep neural networks, (2020)
- [6] Huang X, Kroening D, Ruan W, Sharp J, Sun Y, Thamo E, Wu M, Yi X., A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability, Computer Science Review, 37, (2020)
- [7] Katz G, Barrett C, Dill DL, Julian K, Kochenderfer MJ., Reluplex: An efficient SMT solver for verifying deep neural networks, Proc. of the 29th Int’l Conf. on Computer Aided Verification, pp. 97-117, (2017)
- [8] Singh G, Gehr T, Mirman M, Puschel M, Vechev M., Fast and effective robustness certification, Advances in Neural Information Processing Systems 31: Annual Conf. on Neural Information Processing Systems, pp. 10825-10836, (2018)
- [9] Tran HD, Manzanas Lopez D, Musau P, Yang X, Nguyen LV, Xiang W, Johnson TT., Star-based reachability analysis for deep neural networks, Proc. of the 23rd Int’l Symp. on Formal Methods (FM 2019), pp. 670-686, (2019)
- [10] Raghunathan A, Steinhardt J, Liang P., Semidefinite relaxations for certifying robustness to adversarial examples, Advances in Neural Information Processing Systems 31: Annual Conf. on Neural Information Processing Systems, (2018)