Adversarial Attacks in Industrial Control Cyber Physical Systems

Cited by: 2
Authors
Figueroa, Henry [1 ]
Wang, Yi [1 ]
Giakos, George C. [1 ]
Affiliation
[1] Manhattan Coll, Elect & Comp Engn Dept, Bronx, NY 10471 USA
Source
2022 IEEE INTERNATIONAL CONFERENCE ON IMAGING SYSTEMS AND TECHNIQUES (IST 2022) | 2022
Keywords
adversarial attack; cyber physical systems; vulnerable system; critical systems; industrial control;
DOI
10.1109/IST55454.2022.9827763
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Machine learning and deep learning algorithms are at the frontier of artificial intelligence (AI) and are reshaping the current landscape of computing. However, the trustworthiness and reliability of AI models has become a growing concern as they are widely deployed. Adversarial attacks are cyber attacks that target machine learning and deep learning algorithms, fooling the trained network into making inaccurate predictions. As a result, adversarial attacks can affect critical AI systems, such as industrial control cyber-physical systems. Because these systems rely on machine learning models to perform day-to-day functions, they are inherently prone to adversarial attacks, since machine learning models are highly vulnerable to adversarial examples. Research on the consequences of these attacks can offer insight into how such malicious attacks on a vulnerable system could be prevented. In this study, three adversarial cyber attacks, specifically on power systems, are presented: the Fast Gradient Sign Method, DeepFool, and Jacobian-based Saliency Map Attack were used to generate adversarial examples for machine learning and deep learning models. The outcome of this study clearly indicates that adversarial attacks have negative implications for the performance of deep neural networks in cyber-physical systems.
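To illustrate the first of the three attacks the abstract names, the sketch below shows the Fast Gradient Sign Method on a toy binary classifier. This is a minimal illustrative example, not the paper's implementation: the linear model, its weights, and the sensor-style input values are all assumed for demonstration. FGSM perturbs the input in the direction of the sign of the loss gradient with respect to the input, x_adv = x + ε·sign(∇ₓL(x, y)).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """One-step FGSM on a logistic-regression classifier.

    For binary cross-entropy loss, the gradient with respect to
    the input is dL/dx = (sigmoid(w.x + b) - y) * w, so no autodiff
    framework is needed for this toy model.
    """
    grad = (sigmoid(np.dot(w, x) + b) - y) * w
    return x + eps * np.sign(grad)

# Hypothetical linear model and a clean input of true class 1
w = np.array([1.0, -2.0, 0.5])
b = 0.0
x = np.array([0.2, -0.1, 0.3])
y = 1.0

clean_score = sigmoid(np.dot(w, x) + b)        # > 0.5, classified correctly
x_adv = fgsm_attack(x, y, w, b, eps=0.3)
adv_score = sigmoid(np.dot(w, x_adv) + b)      # pushed below 0.5: prediction flips
```

For the deep models studied in the paper, the analytic gradient would be replaced by backpropagation through the network; DeepFool and JSMA follow the same idea of small, targeted input perturbations but choose the perturbation direction differently.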
Pages: 6