Enhancing Adversarial Robustness for SVM Based on Lagrange Duality

Cited by: 0
Authors
Liu, Yuting [1 ]
Gu, Hong [1 ]
Qin, Pan [1 ]
Affiliations
[1] Dalian Univ Technol, Sch Control Sci & Engn, Dalian, Peoples R China
Source
2024 14TH ASIAN CONTROL CONFERENCE, ASCC 2024 | 2024
Keywords
support vector machine; adversarial robustness; certified defense; Lagrange duality;
DOI
None available
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Adversarial examples generated by adversarial attacks pose security threats to the application of machine learning models. Certified defense can improve a model's adversarial robustness against a wide range of adversarial attacks. Despite substantial research efforts to enhance adversarial robustness in recent years, the focus has mainly been on deep neural networks. However, it is important to extend this research to classic models such as support vector machines (SVM), which remain relevant even in the era of deep learning. Certified defenses for SVM therefore merit attention. In this paper, a verified training SVM method (VT-SVM) based on Lagrange duality is proposed. The proposed method incorporates adversarial robustness into the SVM learning framework. Experimental results demonstrate that the proposed method achieves both accurate predictions and enhanced adversarial robustness.
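The abstract does not detail the VT-SVM formulation, but the connection between Lagrange duality and certified robustness can be illustrated with a standard, well-known result: for a linear classifier f(x) = w·x + b, the largest ℓ∞ perturbation that provably cannot flip the prediction at a correctly classified point x is |w·x + b| / ‖w‖₁, since the dual norm of ℓ∞ is ℓ₁. The sketch below is illustrative only and is not the paper's method; the function name is hypothetical.

```python
import numpy as np

# Hedged sketch (NOT the paper's VT-SVM): certified l_inf robustness
# radius for a linear classifier f(x) = w.x + b. By duality,
#   max_{||d||_inf <= eps} |w.d| = eps * ||w||_1,
# so the prediction sign(f(x)) is certifiably constant within
#   r(x) = |w.x + b| / ||w||_1.
def certified_linf_radius(w, b, x):
    """Largest eps such that sign(w.(x+d) + b) is constant for all ||d||_inf <= eps."""
    w = np.asarray(w, dtype=float)
    x = np.asarray(x, dtype=float)
    return abs(float(w @ x) + b) / float(np.sum(np.abs(w)))

# Toy separator: f(x) = 2*x1 - x2 + 0.5, evaluated at x = (1, 0).
w, b = np.array([2.0, -1.0]), 0.5
x = np.array([1.0, 0.0])
print(certified_linf_radius(w, b, x))  # |2.5| / 3
```

For a kernel SVM or for training-time certification (as the title's "verified training" suggests), the bound would instead be folded into the learning objective, which is where the paper's Lagrange-duality machinery presumably enters.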
Pages: 65-68
Page count: 4