A black-box adversarial attack for poisoning clustering

Cited by: 25
Authors
Cina, Antonio Emanuele [1 ]
Torcinovich, Alessandro [1 ]
Pelillo, Marcello [1 ]
Affiliations
[1] Ca Foscari Univ Venice, Venice, Italy
Keywords
Adversarial learning; Unsupervised learning; Clustering; Robustness evaluation; Machine learning security; TUTORIAL;
DOI
10.1016/j.patcog.2021.108306
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Clustering algorithms play a fundamental role as tools in decision-making and sensible automation processes. Due to the widespread use of these applications, a robustness analysis of this family of algorithms against adversarial noise has become imperative. To the best of our knowledge, however, only a few works have currently addressed this problem. In an attempt to fill this gap, in this work, we propose a black-box adversarial attack for crafting adversarial samples to test the robustness of clustering algorithms. We formulate the problem as a constrained minimization program, general in its structure and customizable by the attacker according to her capability constraints. We do not assume any information about the internal structure of the victim clustering algorithm, and we allow the attacker to query it as a service only. In the absence of any derivative information, we perform the optimization with a custom approach inspired by the Abstract Genetic Algorithm (AGA). In the experimental part, we demonstrate the sensitivity of different single and ensemble clustering algorithms to our crafted adversarial samples in different scenarios. Furthermore, we compare our algorithm with a state-of-the-art approach, showing that we are able to match or even outperform its performance. Finally, to highlight the general nature of the generated noise, we show that our attacks are transferable even against supervised algorithms such as SVMs, random forests, and neural networks. (c) 2021 Elsevier Ltd. All rights reserved.
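The abstract's core idea, a gradient-free, query-only attack that perturbs a bounded subset of samples to degrade a victim clustering, can be sketched as follows. This is an illustrative sketch only, not the authors' AGA-based method: the victim service (scikit-learn's KMeans), the fitness proxy (adjusted Rand index against the clean partition), the L-infinity perturbation budget `eps`, and all population/mutation parameters are assumptions introduced here for demonstration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

def cluster_as_service(X):
    """Victim clustering algorithm, queried strictly as a black box."""
    return KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

def genetic_poisoning_attack(X, target_idx, eps=0.5, pop_size=20,
                             generations=30, rng=None):
    """Genetic-style search for a perturbation of the samples in
    `target_idx`, bounded by `eps` in L-inf norm, that most degrades
    the clustering (lowest ARI w.r.t. the clean labels)."""
    rng = np.random.default_rng(rng)
    clean_labels = cluster_as_service(X)
    # population of candidate perturbations, one per targeted sample
    pop = rng.uniform(-eps, eps, size=(pop_size, len(target_idx), X.shape[1]))
    best_delta, best_score = None, np.inf
    for _ in range(generations):
        scores = []
        for delta in pop:
            Xp = X.copy()
            Xp[target_idx] += delta
            # fitness: lower ARI = stronger attack
            scores.append(adjusted_rand_score(clean_labels,
                                              cluster_as_service(Xp)))
        scores = np.asarray(scores)
        order = np.argsort(scores)
        if scores[order[0]] < best_score:
            best_score = scores[order[0]]
            best_delta = pop[order[0]].copy()
        # selection: keep the fittest half, refill with mutated copies,
        # then re-project onto the eps-ball (the capability constraint)
        elite = pop[order[: pop_size // 2]]
        mutants = elite + rng.normal(0.0, 0.1 * eps, size=elite.shape)
        pop = np.clip(np.concatenate([elite, mutants]), -eps, eps)
    return best_delta, best_score
```

Note that the only interaction with the victim is through `cluster_as_service`, mirroring the query-only threat model described in the abstract; no derivatives of the clustering objective are ever used.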
Pages: 11