Enhancing CAN security with ML-based IDS: Strategies and efficacies against adversarial attacks

Cited by: 1
Authors
Lin, Ying-Dar [1 ]
Chan, Wei-Hsiang [1 ]
Lai, Yuan-Cheng [2 ]
Yu, Chia-Mu [3 ]
Wu, Yu-Sung [1 ]
Lee, Wei-Bin [4 ]
Affiliations
[1] Natl Yang Ming Chiao Tung Univ, Dept Comp Sci, Hsinchu 300, Taiwan
[2] Natl Taiwan Univ Sci & Technol, Dept Informat Management, Taipei 10607, Taiwan
[3] Natl Yang Ming Chiao Tung Univ, Dept Elect & Elect Engn, Hsinchu 300, Taiwan
[4] Hon Hai Res Inst, Taipei, Taiwan
Keywords
Adversarial attack; Machine learning; Intrusion detection; Distance-based optimization; Electric vehicle
DOI
10.1016/j.cose.2025.104322
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline code
0812
Abstract
Controller Area Networks (CAN) have recently faced serious security threats due to their inherent vulnerabilities and the increasing sophistication of cyberattacks targeting automotive and industrial systems. This paper focuses on enhancing the security of CAN, which currently lack adequate defense mechanisms. We propose integrating Machine Learning-based Intrusion Detection Systems (ML-based IDS) into the network to address this vulnerability. However, ML systems are themselves susceptible to adversarial attacks that cause misclassification. To mitigate this risk, we introduce three combined defense methods: adversarial training, ensemble learning, and distance-based optimization. In the distance-based optimization, we employ a simulated annealing algorithm to optimize how far samples are moved in feature space, aiming to minimize intra-class distance and maximize inter-class distance. Our results show that the ZOO attack is the most potent adversarial attack, significantly degrading model performance. Among the models, the basic models achieve an F1 score of 0.99, with the CNN being the most robust against adversarial attacks. Under known adversarial attacks, the average F1 score drops to 0.56; adversarial training with triplet loss performs poorly, achieving only 0.64, while our defense method attains the highest F1 score of 0.97. Under unknown adversarial attacks, the F1 score drops to 0.24, with adversarial training with triplet loss scoring 0.47; our defense method still achieves the highest score, 0.61. These results demonstrate our method's strong performance against both known and unknown adversarial attacks.
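The distance-based optimization described above can be pictured with a minimal sketch. It assumes the objective is the gap between mean intra-class distance (samples to their own class centroid) and mean inter-class distance (between class centroids), and that simulated annealing searches over a per-class shift applied in feature space. All function names, the neighbor proposal, and the cooling schedule are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of distance-based optimization via simulated annealing:
# find a per-class feature-space shift that shrinks intra-class spread and
# widens inter-class gaps. Not the paper's actual code.
import numpy as np

def intra_class_distance(X, y):
    """Mean distance of samples to their own class centroid."""
    total, n = 0.0, 0
    for c in np.unique(y):
        Xc = X[y == c]
        total += np.linalg.norm(Xc - Xc.mean(axis=0), axis=1).sum()
        n += len(Xc)
    return total / n

def inter_class_distance(X, y):
    """Mean pairwise distance between class centroids."""
    centroids = np.array([X[y == c].mean(axis=0) for c in np.unique(y)])
    pairs = [np.linalg.norm(a - b) for i, a in enumerate(centroids)
             for b in centroids[i + 1:]]
    return float(np.mean(pairs))

def objective(X, y):
    # Lower is better: small intra-class spread, large inter-class gaps.
    return intra_class_distance(X, y) - inter_class_distance(X, y)

def anneal_shift(X, y, steps=1000, t0=1.0, cooling=0.995, scale=0.05, seed=0):
    """Simulated annealing over one shift vector per class."""
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    shift = np.zeros((len(classes), X.shape[1]))

    def apply(s):
        Xs = X.copy()
        for i, c in enumerate(classes):
            Xs[y == c] += s[i]
        return Xs

    cur = best = objective(apply(shift), y)
    best_shift, t = shift.copy(), t0
    for _ in range(steps):
        cand = shift + rng.normal(0.0, scale, shift.shape)  # random neighbor
        val = objective(apply(cand), y)
        # Metropolis rule: always accept improvements; accept worse moves
        # with temperature-dependent probability to escape local minima.
        if val < cur or rng.random() < np.exp((cur - val) / max(t, 1e-12)):
            shift, cur = cand, val
            if val < best:
                best, best_shift = val, cand.copy()
        t *= cooling
    return best_shift
```

One plausible reading of the abstract is that such a shift is applied to training features before the IDS is (re)trained, so that classes become more separable and adversarially perturbed inputs are less likely to cross a decision boundary; the exact placement of this step in the pipeline is the paper's, not this sketch's.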
Pages: 13