ROBUSTNESS OF DEEP NEURAL NETWORKS IN ADVERSARIAL EXAMPLES

Cited by: 0
Authors
Teng, Da [1]
Song, Xiao [1]
Gong, Guanghong [1]
Han, Liang [1]
Affiliations
[1] Beihang Univ, Sch Automat, Beijing, Peoples R China
Keywords
machine learning; deep learning; neural networks; adversarial examples; COMMAND;
DOI
Not available
Chinese Library Classification (CLC) number
T [Industrial Technology];
Subject classification code
08;
Abstract
Deep neural networks have achieved state-of-the-art performance in many artificial intelligence areas, such as object recognition, speech recognition, and machine translation. Although deep neural networks have high expressive power, they are prone to overfitting because of their high dimensionality. In recent applications, deep neural networks have been found to be vulnerable to adversarial perturbations: small input changes that can substantially increase the network's prediction error. This paper proposes a novel training algorithm to improve the robustness of neural networks against adversarial examples.
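The record does not describe the proposed training algorithm itself, but the adversarial perturbations the abstract refers to can be illustrated with a minimal sketch of the Fast Gradient Sign Method (FGSM), assuming a PyTorch classifier that outputs logits; the helper name and the epsilon value below are illustrative and are not taken from the paper.

import torch
import torch.nn.functional as F

def fgsm_example(model, image, label, epsilon=0.03):
    # Hypothetical helper: returns an adversarially perturbed copy of `image`
    # (a batch of inputs) that tends to raise the model's prediction error.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, bounded elementwise by epsilon.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

Robust-training schemes of the kind the abstract mentions typically mix such perturbed inputs into the training set or loss, although the specific procedure used in this paper is not given in the record.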
Pages: 123-133
Page count: 11
Related papers
50 records in total
  • [1] Robustness of deep neural networks in adversarial examples
    Song, Xiao (songxiao@buaa.edu.cn), University of Cincinnati (24)
  • [2] Adversarial robustness improvement for deep neural networks
    Eleftheriadis, Charis
    Symeonidis, Andreas
    Katsaros, Panagiotis
    MACHINE VISION AND APPLICATIONS, 2024, 35 (03)
  • [3] Adversarial robustness improvement for deep neural networks
    Charis Eleftheriadis
    Andreas Symeonidis
    Panagiotis Katsaros
    Machine Vision and Applications, 2024, 35
  • [4] Exploring adversarial examples and adversarial robustness of convolutional neural networks by mutual information
    Zhang J.
    Qian W.
    Cao J.
    Xu D.
    Neural Computing and Applications, 2024, 36 (23) : 14379 - 14394
  • [5] Adversarial Robustness Guarantees for Random Deep Neural Networks
    De Palma, Giacomo
    Kiani, Bobak T.
    Lloyd, Seth
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [6] Towards Proving the Adversarial Robustness of Deep Neural Networks
    Katz, Guy
    Barrett, Clark
    Dill, David L.
    Julian, Kyle
    Kochenderfer, Mykel J.
    ELECTRONIC PROCEEDINGS IN THEORETICAL COMPUTER SCIENCE, 2017, (257): 19 - 26
  • [7] Interpretability Analysis of Deep Neural Networks With Adversarial Examples
    Dong Y.-P.
    Su H.
    Zhu J.
    Zidonghua Xuebao/Acta Automatica Sinica, 2022, 48 (01): 75 - 86
  • [8] Compound adversarial examples in deep neural networks
    Li, Yanchun
    Li, Zhetao
    Zeng, Li
    Long, Saiqin
    Huang, Feiran
    Ren, Kui
    INFORMATION SCIENCES, 2022, 613 : 50 - 68
  • [9] Assessing Threat of Adversarial Examples on Deep Neural Networks
    Graese, Abigail
    Rozsa, Andras
    Boult, Terrance E.
    2016 15TH IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS (ICMLA 2016), 2016, : 69 - 74
  • [10] A Geometrical Approach to Evaluate the Adversarial Robustness of Deep Neural Networks
    Wang, Yang
    Dong, Bo
    Xu, Ke
    Piao, Haiyin
    Ding, Yufei
    Yin, Baocai
    Yang, Xin
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2023, 19 (05)