ROBUSTNESS OF DEEP NEURAL NETWORKS IN ADVERSARIAL EXAMPLES

Cited: 0
|
Authors
Teng, Da [1 ]
Song, Xiao m [1 ]
Gong, Guanghong [1 ]
Han, Liang [1 ]
Affiliations
[1] Beihang Univ, Sch Automat, Beijing, Peoples R China
Source
INTERNATIONAL JOURNAL OF INDUSTRIAL ENGINEERING-THEORY APPLICATIONS AND PRACTICE | 2017, Vol. 24, No. 02
Keywords
machine learning; deep learning; neural networks; adversarial examples; COMMAND;
DOI
Not available
Chinese Library Classification
T [Industrial Technology];
Discipline Code
08;
Abstract
Deep neural networks have achieved state-of-the-art performance in many artificial intelligence areas, such as object recognition, speech recognition, and machine translation. While deep neural networks have high expressive capacity, they are prone to overfitting due to their high dimensionality. In recent applications, deep neural networks have been found to be unstable under adversarial perturbations: small input changes that can substantially increase the network's prediction errors. This paper proposes a novel training algorithm to improve the robustness of neural networks against adversarial examples.
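The abstract's notion of an adversarial perturbation can be illustrated with a minimal sketch. The paper's own training algorithm is not specified in this record, so the example below instead shows the standard fast gradient sign method (FGSM) attack on a toy logistic-regression model; the function name `fgsm_perturb` and all numeric values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """Craft an adversarial input for a logistic-regression model via
    FGSM: step the input in the direction of the sign of the gradient
    of the cross-entropy loss with respect to the input."""
    p = sigmoid(w @ x + b)      # predicted probability of class 1
    grad_x = (p - y) * w        # d(cross-entropy)/dx for this model
    return x + eps * np.sign(grad_x)

# Toy white-box setting: the attacker knows the model parameters.
w = np.array([2.0, -3.0])
b = 0.5
x = np.array([1.0, 0.4])        # clean input with true label y = 1
y = 1.0

clean_prob = sigmoid(w @ x + b)             # confident, correct prediction
x_adv = fgsm_perturb(x, w, b, y, eps=0.3)   # small L-infinity perturbation
adv_prob = sigmoid(w @ x_adv + b)           # confidence collapses
```

Adversarial-training schemes in the spirit the abstract describes typically mix such perturbed inputs back into the training set so the model learns to classify them correctly.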
Pages: 123-133 (11 pages)
Related Papers
50 records total
  • [31] Effect of adversarial examples on the robustness of CAPTCHA
    Zhang, Yang
    Gao, Haichang
    Pei, Ge
    Kang, Shuai
    Zhou, Xin
    2018 INTERNATIONAL CONFERENCE ON CYBER-ENABLED DISTRIBUTED COMPUTING AND KNOWLEDGE DISCOVERY (CYBERC 2018), 2018, : 1 - 10
  • [32] On the robustness of randomized classifiers to adversarial examples
    Pinot, Rafael
    Meunier, Laurent
    Yger, Florian
    Gouy-Pailler, Cedric
    Chevaleyre, Yann
    Atif, Jamal
    MACHINE LEARNING, 2022, 111 (09) : 3425 - 3457
  • [33] Adversarial Robustness of Deep Learning: Theory, Algorithms, and Applications
    Ruan, Wenjie
    Yi, Xinping
    Huang, Xiaowei
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT, CIKM 2021, 2021, : 4866 - 4869
  • [34] Analysing Adversarial Examples for Deep Learning
    Jung, Jason
    Akhtar, Naveed
    Hassan, Ghulam
    VISAPP: PROCEEDINGS OF THE 16TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS - VOL. 5: VISAPP, 2021, : 585 - 592
  • [35] Capture the Bot: Using Adversarial Examples to Improve CAPTCHA Robustness to Bot Attacks
    Hitaj, Dorjan
    Hitaj, Briland
    Jajodia, Sushil
    Mancini, Luigi V.
    IEEE INTELLIGENT SYSTEMS, 2021, 36 (05) : 104 - 111
  • [36] Adversarial Examples Against Deep Neural Network based Steganalysis
    Zhang, Yiwei
    Zhang, Weiming
    Chen, Kejiang
    Liu, Jiayang
    Liu, Yujia
    Yu, Nenghai
    PROCEEDINGS OF THE 6TH ACM WORKSHOP ON INFORMATION HIDING AND MULTIMEDIA SECURITY (IH&MMSEC'18), 2018, : 67 - 72
  • [37] Towards the Development of Robust Deep Neural Networks in Adversarial Settings
    Huster, Todd P.
    Chiang, Cho-Yu Jason
    Chadha, Ritu
    Swami, Ananthram
    2018 IEEE MILITARY COMMUNICATIONS CONFERENCE (MILCOM 2018), 2018, : 419 - 424
  • [38] A survey on the vulnerability of deep neural networks against adversarial attacks
    Michel, Andy
    Jha, Sumit Kumar
    Ewetz, Rickard
    Progress in Artificial Intelligence, 2022, 11 : 131 - 141
  • [39] AdvAttackVis: An Adversarial Attack Visualization System for Deep Neural Networks
    Ding Wei-jie
    Shen Xuchen
    Yuan Ying
    Mao Ting-yun
    Sun Guo-dao
    Chen Li-li
    Chen Bing-ting
    INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS, 2024, 15 (05) : 383 - 391
  • [40] Adversarial Attacks and Defenses Against Deep Neural Networks: A Survey
    Ozdag, Mesut
    CYBER PHYSICAL SYSTEMS AND DEEP LEARNING, 2018, 140 : 152 - 161