Testing and Verification of the Deep Neural Networks Against Sparse Pixel Defects

Times Cited: 0
Authors
Szczepankiewicz, Michal [1 ]
Radlak, Krystian [2 ,3 ]
Szczepankiewicz, Karolina [4 ]
Popowicz, Adam [4 ]
Zawistowski, Pawel [2 ]
Affiliations
[1] NVIDIA, Warsaw, Poland
[2] Warsaw Univ Technol, Warsaw, Poland
[3] Vay Technol, Berlin, Germany
[4] Silesian Tech Univ, Gliwice, Poland
Source
COMPUTER SAFETY, RELIABILITY, AND SECURITY, SAFECOMP 2022 WORKSHOPS | 2022 / Vol. 13415
Keywords
Dependability; Adversarial attacks; Deep learning; Evolution algorithms; Differential evolution
DOI
10.1007/978-3-031-14862-0_4
CLC Classification
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Deep neural networks can produce outstanding results when applied to image recognition tasks but are susceptible to image defects and modifications. Substantial degradation of an image can be detected by automatic or interactive prevention techniques. However, sparse pixel defects may have a significant impact on the dependability of safety-critical systems, especially autonomous driving vehicles. Such perturbations can limit the perception capabilities of the system while remaining undetected by a human observer. The effective generation of such cases facilitates the simulation of real-life challenges caused by sparse pixel defects, such as occluded or stained objects. This work introduces a novel sparse adversarial attack generation method based on a differential evolution strategy. Additionally, we introduce a novel framework for sparse adversarial attack generation, which can be integrated into the safety-critical systems development process. An empirical evaluation demonstrates that the proposed method outperforms and complements state-of-the-art techniques, allowing for a complete evaluation of an image recognition system.
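The abstract describes searching for sparse (few-pixel) adversarial perturbations with a differential evolution strategy. The paper's actual algorithm is not reproduced in this record; the following is only a minimal, generic sketch of DE-based sparse-pixel search. The `toy_confidence` fitness function, the image dimensions, and all hyperparameters (`K_PIXELS`, `POP`, `GENS`, `F`, `CR`) are illustrative assumptions — in a real evaluation the fitness would query the classifier under test.

```python
# Minimal sketch: differential evolution over K pixel (x, y, value) triples,
# minimizing a stand-in "classifier confidence". Purely illustrative.
import random

K_PIXELS, POP, GENS = 3, 20, 50   # pixels to perturb, population size, generations
W, H, F, CR = 8, 8, 0.5, 0.9      # toy image size, DE scale factor, crossover rate

def toy_confidence(pixels):
    """Stand-in for model confidence on the true class; lower = stronger
    attack. A real system would run the network on the perturbed image."""
    cx, cy = W / 2, H / 2
    # Toy objective: confidence drops as perturbed pixels approach the centre.
    return sum((x - cx) ** 2 + (y - cy) ** 2 for x, y, _ in pixels)

def clip(v, lo, hi):
    return max(lo, min(hi, v))

def random_candidate():
    # Each candidate encodes K pixel coordinates and replacement intensities.
    return [(random.uniform(0, W - 1), random.uniform(0, H - 1),
             random.uniform(0, 255)) for _ in range(K_PIXELS)]

def evolve():
    pop = [random_candidate() for _ in range(POP)]
    fit = [toy_confidence(c) for c in pop]
    for _ in range(GENS):
        for i in range(POP):
            # Classic DE/rand/1 mutation: a + F * (b - c), per gene.
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = []
            for g in range(K_PIXELS):
                if random.random() < CR:  # crossover: take the mutated gene
                    x = clip(a[g][0] + F * (b[g][0] - c[g][0]), 0, W - 1)
                    y = clip(a[g][1] + F * (b[g][1] - c[g][1]), 0, H - 1)
                    v = clip(a[g][2] + F * (b[g][2] - c[g][2]), 0, 255)
                    trial.append((x, y, v))
                else:                     # otherwise inherit the parent gene
                    trial.append(pop[i][g])
            f = toy_confidence(trial)
            if f < fit[i]:                # greedy selection: keep the better attack
                pop[i], fit[i] = trial, f
    best = min(range(POP), key=fit.__getitem__)
    return pop[best], fit[best]

best, score = evolve()
```

Because the search is gradient-free, it only needs query access to the model's confidence scores, which is why evolution strategies are a natural fit for black-box sparse attacks.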
Pages: 71-82
Page count: 12
Cited References
30 records in total
[1]   A State-of-the-Art Survey on Deep Learning Theory and Architectures [J].
Alom, Md Zahangir ;
Taha, Tarek M. ;
Yakopcic, Chris ;
Westberg, Stefan ;
Sidike, Paheding ;
Nasrin, Mst Shamima ;
Hasan, Mahmudul ;
Van Essen, Brian C. ;
Awwal, Abdul A. S. ;
Asari, Vijayan K. .
ELECTRONICS, 2019, 8 (03)
[2]   Toward a Matrix-Free Covariance Matrix Adaptation Evolution Strategy [J].
Arabas, Jarosław ;
Jagodzinski, Dariusz .
IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, 2020, 24 (01) :84-98
[3]   Towards Evaluating the Robustness of Neural Networks [J].
Carlini, Nicholas ;
Wagner, David .
2017 IEEE SYMPOSIUM ON SECURITY AND PRIVACY (SP), 2017, :39-57
[4]  
Carrère JP, 2014, INT RELIAB PHY SYM
[5]   Predicting pixel defect rates based on image sensor parameters [J].
Chapman, Glenn H. ;
Leung, Jenny ;
Namburete, Ana ;
Koren, Israel ;
Koren, Zahava .
2011 IEEE INTERNATIONAL SYMPOSIUM ON DEFECT AND FAULT TOLERANCE IN VLSI AND NANOTECHNOLOGY SYSTEMS (DFT), 2011, :408-416
[6]  
Cheng CH, 2019, ICCAD-IEEE ACM INT, DOI [10.1109/iccad45719.2019.8942153, 10.1109/ECTI-NCON.2019.8692298]
[7]  
Croce F, 2022, Arxiv, DOI arXiv:2006.12834
[8]   Sparse and Imperceivable Adversarial Attacks [J].
Croce, Francesco ;
Hein, Matthias .
2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, :4723-4731
[9]   Robust Physical-World Attacks on Deep Learning Visual Classification [J].
Eykholt, Kevin ;
Evtimov, Ivan ;
Fernandes, Earlence ;
Li, Bo ;
Rahmati, Amir ;
Xiao, Chaowei ;
Prakash, Atul ;
Kohno, Tadayoshi ;
Song, Dawn .
2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, :1625-1634
[10]  
Goodfellow I., 2015, PROC INT C LEARN REP