ROBUSTNESS OF DEEP NEURAL NETWORKS IN ADVERSARIAL EXAMPLES

Times Cited: 0
Authors
Teng, Da [1]
Song, Xiao [1]
Gong, Guanghong [1]
Han, Liang [1]
Affiliations
[1] Beihang Univ, Sch Automat, Beijing, Peoples R China
Source
INTERNATIONAL JOURNAL OF INDUSTRIAL ENGINEERING-THEORY APPLICATIONS AND PRACTICE | 2017, Vol. 24, No. 2
Keywords
machine learning; deep learning; neural networks; adversarial examples; COMMAND
DOI
Not available
CLC Number
T [Industrial Technology]
Discipline Code
08
Abstract
Deep neural networks have achieved state-of-the-art performance in many artificial intelligence areas, such as object recognition, speech recognition, and machine translation. While deep neural networks have high expressive capability, their high dimensionality makes them prone to overfitting. In recent applications, deep neural networks have been found to be unstable under adversarial perturbations: small changes to the input that nonetheless substantially increase the network's prediction error. This paper proposes a novel training algorithm to improve the robustness of neural networks against adversarial examples.
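For illustration, a minimal sketch of how such adversarial examples can be crafted and used during training is given below, based on the fast gradient sign method (FGSM) of Goodfellow et al. (2015). This is not the paper's own algorithm; the model, optimizer, the epsilon step size, and the assumption of inputs scaled to [0, 1] are placeholder assumptions.

    # Generic FGSM sketch, NOT the paper's algorithm. Assumes a PyTorch
    # classifier and image-like inputs scaled to [0, 1].
    import torch
    import torch.nn.functional as F

    def fgsm_example(model, x, y, epsilon=0.03):
        # Perturb x in the gradient-sign direction that increases the loss.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

    def adversarial_training_step(model, optimizer, x, y,
                                  epsilon=0.03, alpha=0.5):
        # Mix the clean-data loss with the loss on FGSM-perturbed inputs,
        # a common recipe for improving robustness to small perturbations.
        x_adv = fgsm_example(model, x, y, epsilon)
        optimizer.zero_grad()  # clear gradients left over from the FGSM pass
        loss = (alpha * F.cross_entropy(model(x), y)
                + (1.0 - alpha) * F.cross_entropy(model(x_adv), y))
        loss.backward()
        optimizer.step()
        return loss.item()

Weighting the clean and adversarial losses (the alpha parameter) is a common design choice: it preserves accuracy on unperturbed data while training the network to resist small gradient-direction perturbations.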
Pages: 123-133
Page count: 11
Related Papers (50 in total)
  • [41] Training Neural Networks with Random Noise Images for Adversarial Robustness
    Park, Ji-Young
    Liu, Lin
    Li, Jiuyong
    Liu, Jixue
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT, CIKM 2021, 2021, : 3358 - 3362
  • [42] Adversarial Evasion Attacks to Deep Neural Networks in ECR Models
    Nemoto, Shota
    Rajapaksha, Subhash
    Perouli, Despoina
    HEALTHINF: PROCEEDINGS OF THE 15TH INTERNATIONAL JOINT CONFERENCE ON BIOMEDICAL ENGINEERING SYSTEMS AND TECHNOLOGIES - VOL 5: HEALTHINF, 2021, : 135 - 141
  • [43] A survey on the vulnerability of deep neural networks against adversarial attacks
    Michel, Andy
    Jha, Sumit Kumar
    Ewetz, Rickard
    PROGRESS IN ARTIFICIAL INTELLIGENCE, 2022, 11 (02) : 131 - 141
  • [44] Enhancing Adversarial Examples on Deep Q Networks with Previous Information
    Sooksatra, Korn
    Rivas, Pablo
    2021 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (IEEE SSCI 2021), 2021,
  • [45] Examining the Proximity of Adversarial Examples to Class Manifolds in Deep Networks
Pócoš, Štefan
Bečková, Iveta
Farkaš, Igor
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2022, PT IV, 2022, 13532 : 645 - 656
  • [46] Explaining Adversarial Examples by Local Properties of Convolutional Neural Networks
    Aghdam, Hamed H.
    Heravi, Elnaz J.
    Puig, Domenec
    PROCEEDINGS OF THE 12TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS (VISIGRAPP 2017), VOL 5, 2017, : 226 - 234
  • [47] Simplicial-Map Neural Networks Robust to Adversarial Examples
    Paluzo-Hidalgo, Eduardo
    Gonzalez-Diaz, Rocio
    Gutierrez-Naranjo, Miguel A.
    Heras, Jonathan
    MATHEMATICS, 2021, 9 (02) : 1 - 16
  • [48] Disrupting adversarial transferability in deep neural networks
    Wiedeman, Christopher
    Wang, Ge
    PATTERNS, 2022, 3 (05):
  • [49] Analyzing the Noise Robustness of Deep Neural Networks
    Liu, Mengchen
    Liu, Shixia
    Su, Hang
    Cao, Kelei
    Zhu, Jun
    2018 IEEE CONFERENCE ON VISUAL ANALYTICS SCIENCE AND TECHNOLOGY (VAST), 2018, : 60 - 71
  • [50] SIT: Stochastic Input Transformation to Defend Against Adversarial Attacks on Deep Neural Networks
    Guesmi, Amira
    Alouani, Ihsen
    Baklouti, Mouna
    Frikha, Tarek
    Abid, Mohamed
    IEEE DESIGN & TEST, 2022, 39 (03) : 63 - 72