Improving Robustness Against Adversarial Attacks with Deeply Quantized Neural Networks

Cited: 0
Authors
Ayaz, Ferheen [3 ]
Zakariyya, Idris [1 ]
Cano, José [1 ]
Keoh, Sye Loong [1 ]
Singer, Jeremy [1 ]
Pau, Danilo [2 ]
Kharbouche-Harrari, Mounia [2 ]
Affiliations
[1] University of Glasgow, United Kingdom
[2] STMicroelectronics, Switzerland
[3] University of Sussex, United Kingdom
Source
arXiv, 2023
Keywords
Engineering Village
DOI
Not available
Indexed terms
Adversarial attack; Black boxes; Co-optimization; Deep neural network; Jacobian regularization; Jacobians; Neural network model; QKeras; Regularisation; White box
Related papers
50 results
  • [1] Improving Robustness Against Adversarial Attacks with Deeply Quantized Neural Networks
    Ayaz, Ferheen
    Zakariyya, Idris
    Cano, Jose
    Keoh, Sye Loong
    Singer, Jeremy
    Pau, Danilo
    Kharbouche-Harrari, Mounia
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [2] Relative Robustness of Quantized Neural Networks Against Adversarial Attacks
    Duncan, Kirsty
    Komendantskaya, Ekaterina
    Stewart, Robert
    Lones, Michael
    2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020,
  • [3] Improving the adversarial robustness of quantized neural networks via exploiting the feature diversity
    Chu, Tianshu
    Fang, Kun
    Yang, Jie
    Huang, Xiaolin
    PATTERN RECOGNITION LETTERS, 2023, 176 : 117 - 122
  • [4] Robustness Against Adversarial Attacks in Neural Networks Using Incremental Dissipativity
    Aquino, Bernardo
    Rahnama, Arash
    Seiler, Peter
    Lin, Lizhen
    Gupta, Vijay
    IEEE CONTROL SYSTEMS LETTERS, 2022, 6 : 2341 - 2346
  • [5] MRobust: A Method for Robustness against Adversarial Attacks on Deep Neural Networks
    Liu, Yi-Ling
    Lomuscio, Alessio
    2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020,
  • [6] On the Robustness of Bayesian Neural Networks to Adversarial Attacks
    Bortolussi, Luca
    Carbone, Ginevra
    Laurenti, Luca
    Patane, Andrea
    Sanguinetti, Guido
    Wicker, Matthew
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, : 1 - 14
  • [7] ROBUSTNESS-AWARE FILTER PRUNING FOR ROBUST NEURAL NETWORKS AGAINST ADVERSARIAL ATTACKS
    Lim, Hyuntak
    Roh, Si-Dong
    Park, Sangki
    Chung, Ki-Seok
    2021 IEEE 31ST INTERNATIONAL WORKSHOP ON MACHINE LEARNING FOR SIGNAL PROCESSING (MLSP), 2021,
  • [8] An orthogonal classifier for improving the adversarial robustness of neural networks
    Xu, Cong
    Li, Xiang
    Yang, Min
    INFORMATION SCIENCES, 2022, 591 : 251 - 262
  • [9] Improving Robustness of Facial Landmark Detection by Defending against Adversarial Attacks
    Zhu, Congcong
    Li, Xiaoqiang
    Li, Jide
    Dai, Songmin
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 11731 - 11740
  • [10] Bringing robustness against adversarial attacks
    Gean T. Pereira
    André C. P. L. F. de Carvalho
    Nature Machine Intelligence, 2019, 1 : 499 - 500