ROBUSTNESS OF DEEP NEURAL NETWORKS IN ADVERSARIAL EXAMPLES

Cited: 0
Authors
Teng, Da [1 ]
Song, Xiao m [1 ]
Gong, Guanghong [1 ]
Han, Liang [1 ]
Institutions
[1] Beihang Univ, Sch Automat, Beijing, Peoples R China
Source
INTERNATIONAL JOURNAL OF INDUSTRIAL ENGINEERING-THEORY APPLICATIONS AND PRACTICE | 2017, Vol. 24, No. 2
Keywords
machine learning; deep learning; neural networks; adversarial examples; COMMAND;
DOI
Not available
CLC Classification
T [Industrial Technology];
Subject Classification
08;
Abstract
Deep neural networks have achieved state-of-the-art performance in many artificial intelligence areas, such as object recognition, speech recognition, and machine translation. While deep neural networks have high expressive capacity, they are prone to overfitting due to their high dimensionality. In recent applications, deep neural networks have been found to be unstable under adversarial perturbations: small input changes that can nonetheless sharply increase the network's prediction errors. This paper proposes a novel training algorithm to improve the robustness of neural networks against adversarial examples.
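The abstract's notion of an adversarial perturbation can be made concrete with the Fast Gradient Sign Method (FGSM): each input coordinate is nudged by at most epsilon in the direction that increases the loss most. The sketch below illustrates this on a toy logistic-regression model; the model, weights, and epsilon value are illustrative assumptions, and this is not the paper's proposed training algorithm.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_loss(z, y):
    # Cross-entropy written on the logit z for numerical stability:
    # log(1 + e^z) - y*z equals -log p for y=1 and -log(1-p) for y=0.
    return np.logaddexp(0.0, z) - y * z

def fgsm(x, y, w, b, eps):
    """x_adv = x + eps * sign(grad_x loss): every coordinate moves by at most
    eps, in the direction that (to first order) increases the loss most."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w          # gradient of the cross-entropy w.r.t. x
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
dim = 100
w, b = rng.normal(size=dim), 0.0
x = rng.normal(size=dim)
y = float(w @ x + b >= 0.0)       # the model's own clean prediction

x_adv = fgsm(x, y, w, b, eps=0.25)
loss_clean = logistic_loss(w @ x + b, y)
loss_adv = logistic_loss(w @ x_adv + b, y)
# The perturbation is bounded by eps per coordinate, yet the loss rises,
# often enough to flip the model's prediction.
```

Adversarial training, the family of defenses the paper's algorithm belongs to, augments each training batch with such perturbed inputs so the network also fits the worst case inside the epsilon-ball.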
Pages: 123-133 (11 pages)
Related Papers
50 records
  • [21] A decade of adversarial examples: a survey on the nature and understanding of neural network non-robustness
    Trusov, A. V.
    Limonova, E. E.
    Arlazarov, V. V.
    COMPUTER OPTICS, 2025, 49 (02) : 222 - 252
  • [22] Robustness to adversarial examples can be improved with overfitting
    Deniz, Oscar
    Pedraza, Anibal
    Vallez, Noelia
    Salido, Jesus
    Bueno, Gloria
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2020, 11 (04) : 935 - 944
  • [24] Deep Networks with RBF Layers to Prevent Adversarial Examples
    Vidnerova, Petra
    Neruda, Roman
    ARTIFICIAL INTELLIGENCE AND SOFT COMPUTING, ICAISC 2018, PT I, 2018, 10841 : 257 - 266
  • [25] Generalizing universal adversarial perturbations for deep neural networks
    Zhang, Yanghao
    Ruan, Wenjie
    Wang, Fu
    Huang, Xiaowei
    MACHINE LEARNING, 2023, 112 (05) : 1597 - 1626
  • [27] ε-Weakened Robustness of Deep Neural Networks
    Huang, Pei
    Yang, Yuting
    Liu, Minghao
    Jia, Fuqi
    Ma, Feifei
    Zhang, Jian
    PROCEEDINGS OF THE 31ST ACM SIGSOFT INTERNATIONAL SYMPOSIUM ON SOFTWARE TESTING AND ANALYSIS, ISSTA 2022, 2022, : 126 - 138
  • [28] Adversarial Minimax Training for Robustness Against Adversarial Examples
    Komiyama, Ryota
    Hattori, Motonobu
    NEURAL INFORMATION PROCESSING (ICONIP 2018), PT II, 2018, 11302 : 690 - 699
  • [29] Effect of adversarial examples on the robustness of CAPTCHA
    Zhang, Yang
    Gao, Haichang
    Pei, Ge
    Kang, Shuai
    Zhou, Xin
    2018 INTERNATIONAL CONFERENCE ON CYBER-ENABLED DISTRIBUTED COMPUTING AND KNOWLEDGE DISCOVERY (CYBERC 2018), 2018, : 1 - 10
  • [30] Adversarial Robustness of Deep Learning: Theory, Algorithms, and Applications
    Ruan, Wenjie
    Yi, Xinping
    Huang, Xiaowei
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT, CIKM 2021, 2021, : 4866 - 4869