Defending against sparse adversarial attacks using impulsive noise reduction filters

Times Cited: 1
Authors
Radlak, Krystian [1 ,2 ,3 ]
Szczepankiewicz, Michal [3 ]
Smolka, Bogdan [1 ]
Affiliations
[1] Silesian Tech Univ, Akad 16, PL-44100 Gliwice, Poland
[2] Warsaw Univ Technol, Nowowiejska 15-19, PL-00665 Warsaw, Poland
[3] Exida, Warsaw, Poland
Source
REAL-TIME IMAGE PROCESSING AND DEEP LEARNING 2021 | 2021, Vol. 11736
Keywords
adversarial attacks; sparse adversarial attacks; deep learning; neural networks; impulsive noise; image denoising; evolutionary algorithms; safety
DOI
10.1117/12.2587999
CLC Number
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Deep Neural Networks (DNNs) have been deployed in many real-world applications across industrial and academic domains and have proven to deliver outstanding performance. However, DNNs are vulnerable to adversarial attacks, i.e., small perturbations embedded in an image. Consequently, introducing DNNs into safety-critical systems, such as autonomous vehicles, unmanned aerial vehicles, or healthcare devices, carries a high risk that their ability to recognize and interpret the environment will be compromised, with potentially devastating consequences. Thus, enhancing the robustness of DNNs through the development of defense mechanisms is of the utmost importance. In this paper, we evaluate a set of state-of-the-art denoising filters designed for impulsive noise removal as defensive solutions. The methods are applied as a pre-processing step in which the adversarial patterns are removed from the source image before the classification task is performed. As a result, the pre-processing defense block can be easily integrated with any type of classifier, without any knowledge of the training procedure or the internal architecture of the model. Moreover, the evaluated filtering methods can be considered universal defensive techniques, as they are independent of the internals of the selected attack and can be applied against any type of adversarial threat. The experimental results obtained on the German Traffic Sign Recognition Benchmark (GTSRB) show that the denoising filters provide high robustness against sparse adversarial attacks and do not significantly decrease classification performance on unaltered data.
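As a rough illustration of the pre-processing defense block described above, the sketch below filters an image before it reaches the classifier. It uses a simple channel-wise scalar median filter as a simplified stand-in for the impulsive noise filters evaluated in the paper (it is not the authors' implementation); the classifier `model` and all function and variable names are assumptions.

    # Minimal sketch of a filter-based pre-processing defense, assuming
    # images are HxWxC NumPy arrays. Names below are illustrative.
    import numpy as np
    from scipy.ndimage import median_filter

    def denoise_defense(image: np.ndarray, window: int = 3) -> np.ndarray:
        """Suppress sparse (impulse-like) adversarial perturbations by
        median-filtering each color channel independently."""
        # size=(window, window, 1) filters spatially but never mixes channels.
        return median_filter(image, size=(window, window, 1))

    # Usage with a hypothetical classifier: the defense is attack- and
    # model-agnostic, so it simply wraps any inference call.
    # logits = model.predict(denoise_defense(x_adv)[np.newaxis, ...])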
Pages: 8