ROBUSTNESS OF DEEP NEURAL NETWORKS IN ADVERSARIAL EXAMPLES

Cited by: 0
Authors
Teng, Da [1]
Song, Xiao [1]
Gong, Guanghong [1]
Han, Liang [1]
Affiliations
[1] Beihang Univ, Sch Automat, Beijing, Peoples R China
Keywords
machine learning; deep learning; neural networks; adversarial examples; COMMAND
DOI
Not available
Chinese Library Classification
T [Industrial Technology]
Discipline Code
08
Abstract
Deep neural networks have achieved state-of-the-art performance in many artificial intelligence areas, such as object recognition, speech recognition, and machine translation. While deep neural networks have high expressive capacity, they are prone to overfitting due to their high dimensionality. In recent applications, deep neural networks have been found to be unstable under adversarial perturbations: small input changes that can sharply increase the network's prediction error. This paper proposes a novel training algorithm to improve the robustness of neural networks against adversarial examples.
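The record reproduces only the abstract, not the paper's training algorithm. As a general illustration of the two ideas the abstract names (adversarial perturbations, and training the network to resist them), the following PyTorch sketch generates perturbations with the Fast Gradient Sign Method and mixes clean and adversarial losses during training. The function names and the eps/alpha values are illustrative assumptions, not the authors' method.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps):
    # Fast Gradient Sign Method: step the input by eps in the
    # direction (sign of the loss gradient) that increases the loss.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def adversarial_training_step(model, optimizer, x, y, eps=0.03, alpha=0.5):
    # One optimization step on a weighted mix of the clean-example
    # loss and the adversarial-example loss, so the network learns
    # to classify both correctly. eps and alpha are placeholders.
    x_adv = fgsm_perturb(model, x, y, eps)
    optimizer.zero_grad()
    loss = (alpha * F.cross_entropy(model(x), y)
            + (1.0 - alpha) * F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()
```

Blending the two losses with a fixed weight alpha is one common adversarial-training heuristic; the algorithm proposed in the paper itself may differ.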
Pages: 123-133
Page count: 11
Related Papers
50 records in total
  • [31] Gu, Shuangchi; Yi, Ping; Zhu, Ting; Yao, Yao; Wang, Wei. Detecting Adversarial Examples in Deep Neural Networks using Normalizing Filters. Proceedings of the 11th International Conference on Agents and Artificial Intelligence (ICAART), Vol 2, 2019: 164-173.
  • [32] Kherchouche, Anouar; Fezza, Sid Ahmed; Hamidouche, Wassim; Deforges, Olivier. Natural Scene Statistics for Detecting Adversarial Examples in Deep Neural Networks. 2020 IEEE 22nd International Workshop on Multimedia Signal Processing (MMSP), 2020.
  • [33] Feng, Shiyu; Feng, Feng; Xu, Xiao; Wang, Zheng; Hu, Yining; Xie, Lizhe. Digital Watermark Perturbation for Adversarial Examples to Fool Deep Neural Networks. 2021 International Joint Conference on Neural Networks (IJCNN), 2021.
  • [34] Carrara, Fabio; Caldelli, Roberto; Falchi, Fabrizio; Amato, Giuseppe. On the Robustness to Adversarial Examples of Neural ODE Image Classifiers. 2019 IEEE International Workshop on Information Forensics and Security (WIFS), 2019.
  • [35] Zhao, Jun. Analyzing the Robustness of Deep Learning Against Adversarial Examples. 2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton), 2018: 1060-1064.
  • [36] Ma, Linhai; Liang, Liang. Improving adversarial robustness of deep neural networks via adaptive margin evolution. Neurocomputing, 2023, 551.
  • [37] Li, Pengcheng; Yi, Jinfeng; Zhou, Bowen; Zhang, Lijun. Improving the Robustness of Deep Neural Networks via Adversarial Training with Triplet Loss. Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI), 2019: 2909-2915.
  • [38] Ros, Andrew Slavin; Doshi-Velez, Finale. Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing Their Input Gradients. Thirty-Second AAAI Conference on Artificial Intelligence (AAAI), 2018: 1660-1669.
  • [39] Park, Leo Hyun; Chung, Soochang; Kim, Jaeuk; Kwon, Taekyoung. GradFuzz: Fuzzing deep neural networks with gradient vector coverage for adversarial examples. Neurocomputing, 2023, 522: 165-180.
  • [40] Sun, Guangling; Su, Yuying; Qin, Chuan; Xu, Wenbo; Lu, Xiaofeng; Ceglowski, Andrzej. Complete Defense Framework to Protect Deep Neural Networks against Adversarial Examples. Mathematical Problems in Engineering, 2020, 2020.