Machine learning in adversarial environments

Cited by: 52
Authors
Laskov, Pavel [1 ]
Lippmann, Richard [2 ]
Affiliations
[1] Univ Tubingen, Wilhelm Schickard Inst Comp Sci, D-72070 Tubingen, Germany
[2] MIT, Lincoln Lab, Lexington, MA 02173 USA
Keywords
Adversarial learning; Adversary; Spam; Intrusion detection; Web spam; Robust classifier; Feature deletion; Arms race; Game theory
DOI
10.1007/s10994-010-5207-6
CLC classification number
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Whenever machine learning is used to prevent illegal or unsanctioned activity and there is an economic incentive, adversaries will attempt to circumvent the protection provided. Constraints on how adversaries can manipulate training and test data for classifiers used to detect suspicious behavior make problems in this area tractable and interesting. This special issue highlights papers that span many disciplines, including email spam detection, computer intrusion detection, and detection of web pages deliberately designed to manipulate the rankings of pages returned by modern search engines. The four papers in this special issue provide a standard taxonomy of the types of attacks that can be expected in an adversarial framework, demonstrate how to design classifiers that are robust to deleted or corrupted features, demonstrate the ability of modern polymorphic engines to rewrite malware so that it evades detection by current intrusion detection and antivirus systems, and provide approaches to detect web pages designed to manipulate web page scores returned by search engines. We hope that these papers and this special issue encourage the multidisciplinary cooperation required to address many interesting problems in this relatively new area, including predicting the future of the arms races created by adversarial learning, developing effective long-term defensive strategies, and creating algorithms that can process the massive amounts of training and test data available for internet-scale problems.
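The robustness-to-feature-deletion theme mentioned in the abstract can be illustrated with a minimal sketch: augment the training set with copies in which random features are zeroed, then check accuracy when a simulated adversary deletes features at test time. This is a generic robustness heuristic under assumed parameters (toy data, a 30% deletion rate, scikit-learn's LogisticRegression), not the specific algorithm developed in the special-issue paper on robust classifiers.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy spam-like data: 200 samples, 20 binary features, labels from a noisy linear rule.
X = rng.integers(0, 2, size=(200, 20)).astype(float)
w_true = rng.normal(size=20)
y = (X @ w_true + rng.normal(scale=0.5, size=200) > 0).astype(int)

def augment_with_deletions(X, y, n_copies=3, delete_prob=0.3, rng=rng):
    """Add training copies with random features zeroed to simulate feature deletion."""
    X_aug, y_aug = [X], [y]
    for _ in range(n_copies):
        mask = rng.random(X.shape) > delete_prob  # keep each feature with prob. 1 - delete_prob
        X_aug.append(X * mask)
        y_aug.append(y)
    return np.vstack(X_aug), np.concatenate(y_aug)

# Train on the augmented data.
X_train, y_train = augment_with_deletions(X, y)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Evaluate on test inputs where an adversary has deleted roughly 30% of the features.
X_test = X * (rng.random(X.shape) > 0.3)
print("accuracy under feature deletion:", clf.score(X_test, y))

The design choice here is simply data augmentation: by exposing the learner to inputs with missing features during training, the resulting weights rely less on any single feature, which is the intuition behind feature-deletion-robust classifiers.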
Pages: 115-119
Number of pages: 5