ROBUSTNESS OF DEEP NEURAL NETWORKS IN ADVERSARIAL EXAMPLES

Cited by: 0
Authors
Teng, Da [1]
Song, Xiao [1]
Gong, Guanghong [1]
Han, Liang [1]
Affiliations
[1] Beihang Univ, Sch Automat, Beijing, Peoples R China
Keywords
machine learning; deep learning; neural networks; adversarial examples; COMMAND
DOI
Not available
CLC number
T [Industrial Technology]
Subject classification code
08
Abstract
Deep neural networks have achieved state-of-the-art performance in many areas of artificial intelligence, such as object recognition, speech recognition, and machine translation. Although deep neural networks have high expressive capacity, their high dimensionality makes them prone to overfitting. In recent applications, deep neural networks have also been found to be unstable under adversarial perturbations: changes to the input that are small yet markedly increase the network's prediction error. This paper proposes a novel training algorithm to improve the robustness of neural networks against adversarial examples.
Pages: 123-133
Number of pages: 11
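The record does not describe the proposed training algorithm itself, so the sketch below only illustrates the two ideas the abstract names: crafting a small perturbation that raises the prediction error, and training against such perturbations. It uses the standard fast gradient sign method (FGSM, Goodfellow et al., 2015) and a generic mixed clean/adversarial update in PyTorch; the function names and the epsilon value are illustrative assumptions, not taken from the paper.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def fgsm_perturb(model, x, y, epsilon=0.03):
        # FGSM: step in the sign of the loss gradient. The perturbation is
        # small in max-norm, yet it can sharply increase prediction error.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        return (x_adv + epsilon * x_adv.grad.sign()).detach()

    def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
        # Generic robust-training update (not the authors' algorithm):
        # average the loss on clean and perturbed inputs, then step.
        model.train()
        x_adv = fgsm_perturb(model, x, y, epsilon)
        optimizer.zero_grad()
        loss = 0.5 * (F.cross_entropy(model(x), y) +
                      F.cross_entropy(model(x_adv), y))
        loss.backward()
        optimizer.step()
        return loss.item()

Averaging the clean and adversarial losses is one common way to trade off standard accuracy against robustness; adversarial-training variants differ mainly in how the perturbation is generated and how the two terms are weighted.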
Related papers
50 records in total
  • [41] Detecting Adversarial Image Examples in Deep Neural Networks with Adaptive Noise Reduction
    Liang, Bin
    Li, Hongcheng
    Su, Miaoqiang
    Li, Xirong
    Shi, Wenchang
    Wang, Xiaofeng
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2021, 18 (01) : 72 - 85
  • [43] Adversarial Examples in RF Deep Learning: Detection and Physical Robustness
    Kokalj-Filipovic, Silvija
    Miller, Rob
    Vanhoy, Garrett
    2019 7TH IEEE GLOBAL CONFERENCE ON SIGNAL AND INFORMATION PROCESSING (IEEE GLOBALSIP), 2019
  • [44] Deep neural rejection against adversarial examples
    Sotgiu, Angelo
    Demontis, Ambra
    Melis, Marco
    Biggio, Battista
    Fumera, Giorgio
    Feng, Xiaoyi
    Roli, Fabio
    EURASIP JOURNAL ON INFORMATION SECURITY, 2020, 2020 (01)
  • [45] Adversarial Robustness Certification for Bayesian Neural Networks
    Wicker, Matthew
    Platzer, Andre
    Laurenti, Luca
    Kwiatkowska, Marta
    FORMAL METHODS, PT I, FM 2024, 2025, 14933 : 3 - 28
  • [46] On the Robustness of Bayesian Neural Networks to Adversarial Attacks
    Bortolussi, Luca
    Carbone, Ginevra
    Laurenti, Luca
    Patane, Andrea
    Sanguinetti, Guido
    Wicker, Matthew
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024 : 1 - 14
  • [47] REINFORCING THE ROBUSTNESS OF A DEEP NEURAL NETWORK TO ADVERSARIAL EXAMPLES BY USING COLOR QUANTIZATION OF TRAINING IMAGE DATA
    Miyazato, Shuntaro
    Wang, Xueting
    Yamasaki, Toshihiko
    Aizawa, Kiyoharu
    2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2019, : 884 - 888
  • [48] A regularization perspective based theoretical analysis for adversarial robustness of deep spiking neural networks
    Zhang, Hui
    Cheng, Jian
    Zhang, Jun
    Liu, Hongyi
    Wei, Zhihui
    NEURAL NETWORKS, 2023, 165 : 164 - 174
  • [49] Adversarial robustness in deep neural networks based on variable attributes of the stochastic ensemble model
    Qin, Ruoxi
    Wang, Linyuan
    Du, Xuehui
    Xie, Pengfei
    Chen, Xingyuan
    Yan, Bin
    FRONTIERS IN NEUROROBOTICS, 2023, 17
  • [50] A concealed poisoning attack to reduce deep neural networks' robustness against adversarial samples
    Zheng, Junhao
    Chan, Patrick P. K.
    Chi, Huiyang
    He, Zhimin
    INFORMATION SCIENCES, 2022, 615 : 758 - 773