DCAL: A New Method for Defending Against Adversarial Examples

Times Cited: 0
Authors
Lin, Xiaoyu [1 ,4 ]
Cao, Chunjie [2 ,4 ]
Wang, Longjuan [2 ,4 ]
Liu, Zhiyuan [2 ,4 ]
Li, Mengqian [2 ,4 ]
Ma, Haiying [3 ]
Affiliations
[1] Hainan Univ, Sch Comp Sci & Technol, Haikou 570228, Hainan, Peoples R China
[2] Hainan Univ, Sch Cryptol, Sch Cyberspace Secur, Haikou 570228, Hainan, Peoples R China
[3] Univ Manchester, Sch Comp Sci, Manchester M13 9PL, Lancs, England
[4] Hainan Univ, Key Lab Internet Informat Retrieval Hainan Prov, Haikou 570228, Hainan, Peoples R China
Source
ARTIFICIAL INTELLIGENCE AND SECURITY, ICAIS 2022, PT II | 2022, Vol. 13339
Funding
National Natural Science Foundation of China
Keywords
Adversarial examples; Adversarial attacks; Defense
DOI
10.1007/978-3-031-06788-4_4
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
In recent years, deep learning has shown excellent performance in the field of computer vision. Nevertheless, researchers have found that deep learning systems lack robustness: adding an imperceptible amount of perturbation to the input of a deep learning system can cause the model to fail, and the inputs crafted to cause such failures are called adversarial examples. The existence of adversarial examples hinders the application and popularization of deep-learning-based artificial intelligence systems. Therefore, we propose a denoising convolutional autoencoder incorporating label knowledge (DCAL), a new method for defending against adversarial examples. DCAL serves as a pre-processing module before image classification: the image to be classified is denoised and reconstructed into a new image, which is then sent to the classifier. If the reconstructions that DCAL produces from adversarial examples allow the classifier to classify them correctly, the defense against adversarial examples is achieved. We evaluate the method on two benchmark datasets, MNIST and CIFAR-10, focusing mainly on resisting white-box attacks. The experimental results show that the proposed DCAL outperforms state-of-the-art defense methods in the white-box setting.
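To make the pre-processing idea concrete, the sketch below shows a minimal denoising convolutional autoencoder used as a defense front-end in PyTorch. It is only an illustration of the general pipeline described in the abstract; the exact DCAL architecture and how label knowledge is incorporated are not specified in this record, and the layer sizes, class names, and the `classify_with_defense` helper are assumptions for MNIST-sized inputs.

```python
# Illustrative sketch of an autoencoder-based pre-processing defense
# (NOT the authors' exact DCAL model; architecture details are assumed).
import torch
import torch.nn as nn


class DenoisingConvAutoencoder(nn.Module):
    """Toy denoising convolutional autoencoder for 1x28x28 (MNIST-style) inputs."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1),   # 28x28 -> 14x14
            nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),  # 14x14 -> 7x7
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1),  # 7x7 -> 14x14
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1),   # 14x14 -> 28x28
            nn.Sigmoid(),  # reconstructed image in [0, 1]
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


def classify_with_defense(x, autoencoder, classifier):
    """Denoise/reconstruct the (possibly adversarial) input, then classify the reconstruction."""
    with torch.no_grad():
        x_reconstructed = autoencoder(x)
        logits = classifier(x_reconstructed)
    return logits.argmax(dim=1)
```

In such a pipeline the autoencoder would typically be trained to map perturbed inputs back to their clean counterparts, and at test time the classifier only ever sees the reconstruction rather than the raw (possibly adversarial) image.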
Pages: 38-50
Number of pages: 13