A System-Driven Taxonomy of Attacks and Defenses in Adversarial Machine Learning

Cited by: 0
Authors
Sadeghi, Koosha [1 ]
Banerjee, Ayan [1 ]
Gupta, Sandeep K. S. [1 ]
Affiliations
[1] Arizona State Univ, CIDSE, IMPACT Lab, Tempe, AZ 85281 USA
Source
IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE | 2020 / Vol. 4 / Iss. 04
Keywords
Computational intelligence (CI); adversarial machine learning; supervised learning; attack model; defense model; DEEP NEURAL-NETWORKS; CONVEX-OPTIMIZATION; SECURITY ANALYSIS; ROBUSTNESS; CLASSIFIERS; PERFORMANCE;
D O I
10.1109/TETCI.2020.2968933
CLC Number
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Machine Learning (ML) algorithms, specifically supervised learning, are widely used in modern real-world applications that employ Computational Intelligence (CI) as their core technology, such as autonomous vehicles, assistive robots, and biometric systems. Attacks that cause misclassifications or mispredictions can lead to erroneous decisions, resulting in unreliable operations. Designing robust ML with the ability to provide reliable results in the presence of such attacks has become a top priority in the field of adversarial machine learning. An essential driver of the rapid development of robust ML is the arms race between attack and defense strategists. However, an important prerequisite for this arms race is access to a well-defined system model, so that experiments can be repeated by independent researchers. This article proposes a fine-grained system-driven taxonomy to specify ML applications and adversarial system models in an unambiguous manner, such that independent researchers can replicate experiments and escalate the arms race to develop more evolved and robust ML applications. The article provides taxonomies for: 1) the dataset, 2) the ML architecture, 3) the adversary's knowledge, capability, and goal, 4) the adversary's strategy, and 5) the defense response. In addition, the relationships among these models and taxonomies are analyzed by proposing an adversarial machine learning cycle. The provided models and taxonomies are merged to form a comprehensive system-driven taxonomy, which represents the arms race between ML applications and adversaries in recent years. The taxonomies encode best practices in the field, help evaluate and compare the contributions of research works, and reveal gaps in the field.
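To make the abstract's notion of an attack causing misclassification concrete, the sketch below (not from the paper; the toy weights and the choice of the Fast Gradient Sign Method are illustrative assumptions) perturbs an input to a binary logistic-regression classifier in the direction of the sign of the loss gradient, flipping its predicted label:

```python
import math

def sigmoid(z):
    """Logistic function: maps a score to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method against a logistic-regression classifier.

    For binary cross-entropy loss, the gradient of the loss with respect
    to the input x is (p - y) * w, where p is the predicted probability.
    FGSM adds eps times the sign of that gradient to each input feature.
    """
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    sign = lambda v: (v > 0) - (v < 0)
    return [xi + eps * sign((p - y) * wi) for wi, xi in zip(w, x)]

# Toy classifier and an input correctly classified as class 0.
w, b = [1.0, -2.0], 0.0
x = [-0.3, 0.4]            # score = -1.1, so p < 0.5: predicted class 0

x_adv = fgsm(x, y=0, w=w, b=b, eps=0.7)
p_clean = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
p_adv = sigmoid(sum(wi * xi for wi, xi in zip(w, x_adv)) + b)
print(p_clean < 0.5)   # clean input keeps its correct label
print(p_adv > 0.5)     # perturbed input is misclassified as class 1
```

This is a white-box evasion attack in the paper's terms: the adversary knows the model's weights and perturbs a test-time input. The taxonomy's adversary dimensions (knowledge, capability, goal) correspond here to full model access, an eps-bounded perturbation budget, and untargeted misclassification.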
Pages: 450-467
Page count: 18