Defending ML-Based Feedback Loop System Against Malicious Adversarial Inference Attacks

Cited: 0
Authors
Vahakainu, Petri [1 ]
Lehto, Martti [1 ]
Kariluoto, Antti [1 ]
Affiliations
[1] Univ Jyvaskyla, Fac Informat Technol, Jyvaskyla, Finland
Source
PROCEEDINGS OF THE 16TH INTERNATIONAL CONFERENCE ON CYBER WARFARE AND SECURITY (ICCWS 2021) | 2021
Keywords
adversarial inference attacks; adversarial machine learning; cyber-physical system; cybersecurity; defense mechanisms; ML-utilized feedback loop system;
DOI
10.34190/IWS.21.045
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Discipline code
0812
Abstract
Artificial Intelligence (AI) and its subset, Machine Learning (ML), have developed significantly in recent years. These technologies provide the means to design and implement novel, smart applications that utilize data collected and processed locally or in the cloud. The gathered data can be used to train ML models that generate predictions applicable in fields such as construction, healthcare, the military, or transportation. The common assumption is that the ML environment is benign rather than malicious. In the real world, however, perpetrators may try to maliciously manipulate the testing data by initiating oracle inference attacks. Oracle attacks can be categorized as ML model extraction, model inversion, or membership inference attacks. A perpetrator may also chain these attacks, conducting a model extraction attack before a model inversion attack in order to recover the ML model's parameters. After extracting the model, the perpetrator can conduct a model inversion attack to learn the training dataset and apply an evasion attack to learn a similar model. Such attacks can pose a significant threat to an ML-utilized predictive cyber-physical system (CPS) adjusting heating, ventilation, and air conditioning (HVAC), or can deceive facial recognition to gain access to facilities ranging from smart offices to airports and data centers. In this article, we examine the concepts of adversarial machine learning (AML) and ML, AI and cybersecurity, and conduct a literature review of relevant malicious inference attack vectors. These vectors can be used to mount attacks on ML-based smart building feedback loop systems in the cyber-physical system context. Corresponding countermeasures to prevent these attacks are examined accordingly.
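The membership inference attack mentioned in the abstract can be illustrated with a deliberately minimal sketch. The toy oracle, the fixed confidence values, the threshold, and all function names below are hypothetical illustrations, not taken from the paper: an overfit "victim" model tends to return higher confidence on its training points than on unseen inputs, and an attacker with only query access exploits that gap.

```python
# Toy sketch of a confidence-threshold membership inference attack.
# The "victim" here memorizes its training set (extreme overfitting),
# so its confidence is higher on training points than on unseen ones.

def train_victim(train_set):
    """Return a query-only oracle: input -> confidence score in [0, 1]."""
    memorized = set(train_set)

    def oracle(x):
        # Overfit behavior: near-certain on memorized inputs,
        # noticeably less confident on everything else.
        return 0.99 if x in memorized else 0.60

    return oracle

def membership_inference(oracle, x, threshold=0.9):
    """Guess membership: high confidence suggests x was in the training data."""
    return oracle(x) >= threshold

# The attacker only queries the oracle; it never sees the training set.
train_set = [(0, 1), (2, 3), (4, 5)]
oracle = train_victim(train_set)

print(membership_inference(oracle, (0, 1)))  # training point -> True
print(membership_inference(oracle, (9, 9)))  # unseen point -> False
```

Countermeasures discussed in this line of work (e.g., confidence masking or differentially private training) aim precisely at shrinking the confidence gap this sketch relies on.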
Pages: 382-390
Page count: 9