Extensions and Detailed Analysis of Synergy Between Traditional Classification and Classification Based on Negative Features in Deep Convolutional Neural Networks

Cited by: 0
Authors
Rackovic, Milos [1 ]
Vidakovic, Jovana [1 ]
Milosevic, Nemanja [1 ]
Affiliations
[1] Univ Novi Sad, Fac Sci, Dept Math & Informat, Trg Dositeja Obradovica 4, Novi Sad 21000, Serbia
Keywords
Machine learning; Deep convolutional neural networks; Image classification; Machine learning robustness; Adversarial attacks;
DOI
10.1007/s12559-024-10369-y
Chinese Library Classification
TP18 [Theory of artificial intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In recent times, deep convolutional neural networks have become an irreplaceable tool for pattern recognition in many machine learning applications, especially image classification. At the same time, these models are often used in critical systems, which has motivated recent research into their robustness and reliability. One of the most important issues for these models is their susceptibility to various adversarial attacks. In our previous work, Milošević and Racković (Neural Network World. 2019;29(4):221-34) and Milošević and Racković (Neural Comput Applic. 2021;33:7593-602), a new type of learning applicable to all convolutional neural networks was introduced: classification based on negative features, together with a synergy of traditional models and the newly introduced ones. In the case of partial inputs/image occlusion, it was shown that this new method creates models that are more robust and perform better than traditional models of the same architecture. In this paper, the earlier proposed synergy is extended by introducing negatively trained features and an additional synergy between four independent neural network models. A detailed analysis of the robustness of the newly proposed model is performed on the EMNIST and CIFAR-10 image classification data sets under selected input occlusions and adversarial attacks. The newly proposed neural network architecture improves the robustness of the neural network and increases its resistance to various types of input damage and adversarial attacks.
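The synergy the abstract describes can be illustrated with a minimal NumPy sketch. This is only an assumed reading of the method: two models of the same architecture, one seeing the original input and one seeing its pixel-wise negative, whose class probabilities are averaged at prediction time. The linear "models" `f_pos`/`f_neg` and their random weight matrices are hypothetical stand-ins for the paper's trained CNNs.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical stand-ins for trained networks: in the paper's setup these
# would be CNNs of identical architecture trained separately.
rng = np.random.default_rng(0)
W_pos = rng.normal(size=(64, 10))  # "traditional" model weights
W_neg = rng.normal(size=(64, 10))  # "negative-feature" model weights

def f_pos(x):
    # Logits of the traditionally trained model on the original image.
    return x.reshape(-1, 64) @ W_pos

def f_neg(x):
    # Logits of the model trained on pixel-wise negative images (1 - x).
    return (1.0 - x).reshape(-1, 64) @ W_neg

def synergy_predict(x):
    # Average the two models' class probabilities, then take the argmax.
    p = 0.5 * (softmax(f_pos(x)) + softmax(f_neg(x)))
    return p.argmax(axis=-1)

x = rng.uniform(size=(2, 8, 8))  # two toy "images" with pixels in [0, 1]
pred = synergy_predict(x)
print(pred.shape)                # one predicted label per image
```

Under occlusion, the intuition is that the two models rely on partly different features, so an occlusion that defeats one model need not defeat the other; averaging their probabilities hedges against either single failure mode.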
Pages: 16
Related Papers
32 items in total
[1]  
Bastani O, 2016, ADV NEUR IN, V29
[2]   Improving optimization of convolutional neural networks through parameter fine-tuning [J].
Becherer, Nicholas ;
Pecarina, John ;
Nykl, Scott ;
Hopkinson, Kenneth .
NEURAL COMPUTING & APPLICATIONS, 2019, 31 (08) :3469-3479
[3]  
Carlini N, 2018, ICLR 2018 C, DOI 10.48550/arXiv.1709.10207
[4]   Towards Evaluating the Robustness of Neural Networks [J].
Carlini, Nicholas ;
Wagner, David .
2017 IEEE SYMPOSIUM ON SECURITY AND PRIVACY (SP), 2017, :39-57
[5]  
Cohen G, 2017, Arxiv, DOI 10.48550/arXiv.1702.05373
[6]  
Deng J, 2009, PROC CVPR IEEE, P248, DOI 10.1109/CVPRW.2009.5206848
[7]   Hessian with Mini-Batches for Electrical Demand Prediction [J].
Elias, Israel ;
de Jesus Rubio, Jose ;
Ricardo Cruz, David ;
Ochoa, Genaro ;
Francisco Novoa, Juan ;
Ivan Martinez, Dany ;
Muniz, Samantha ;
Balcazar, Ricardo ;
Garcia, Enrique ;
Felipe Juarez, Cesar .
APPLIED SCIENCES-BASEL, 2020, 10 (06)
[8]   Multi-Cue Pedestrian Classification With Partial Occlusion Handling [J].
Enzweiler, Markus ;
Eigenstetter, Angela ;
Schiele, Bernt ;
Gavrila, Dariu M. .
2010 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2010, :990-997
[9]  
Hashemi AS, 2020, Arxiv, DOI 10.48550/arXiv.2010.14919
[10]   Secure deep neural networks using adversarial image generation and training with Noise-GAN [J].
Hashemi, Atiye Sadat ;
Mozaffari, Saeed .
COMPUTERS & SECURITY, 2019, 86 :372-387