FPGA Adaptive Neural Network Quantization for Adversarial Image Attack Defense

Citations: 0
Authors
Lu, Yufeng [1,2]
Shi, Xiaokang [2,3]
Jiang, Jianan [1,4]
Deng, Hanhui [1,4]
Wang, Yanwen [2,3]
Lu, Jiwu [1,2]
Wu, Di [1,4]
Affiliations
[1] Hunan Univ, Natl Engn Res Ctr Robot Visual Percept & Control T, Changsha 410082, Hunan, Peoples R China
[2] Hunan Univ, Coll Elect & Informat Engn, Changsha 410082, Hunan, Peoples R China
[3] Hunan Univ, Shenzhen Res Inst, Shenzhen 518000, Peoples R China
[4] Hunan Univ, Sch Robot, Changsha 410082, Hunan, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Field programmable gate arrays; Quantization (signal); Computational modeling; Training; Robustness; Neural networks; Real-time systems; Adversarial attack; field-programmable gate array (FPGA); quantized neural networks (QNNs);
DOI
10.1109/TII.2024.3438284
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Quantized neural networks (QNNs) have become standard practice for efficiently deploying deep learning models on hardware platforms in real-world applications. An empirical study on the German Traffic Sign Recognition Benchmark (GTSRB) dataset shows that, under three white-box adversarial attacks (the fast gradient sign method, random + fast gradient sign method, and basic iterative method), the accuracy of the fully quantized model was only 55%, much lower than that of the full-precision model (73%). This indicates that the adversarial robustness of the fully quantized model is much worse than that of the full-precision model. To improve its adversarial robustness, we designed an adversarial attack defense platform based on a field-programmable gate array (FPGA) to jointly optimize the efficiency and robustness of QNNs. Hardware-friendly techniques such as adversarial training and feature squeezing were studied and transferred to the FPGA platform on top of the designed QNN accelerator. Experiments on the GTSRB dataset show that adversarial training embedded on the FPGA increases the model's average accuracy by 2.5% on clean data, 15% under white-box attacks, and 4% under black-box attacks, demonstrating that our methodology can improve the robustness of the fully quantized model under different adversarial attacks.
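To make the two techniques named in the abstract concrete, the following is a minimal, illustrative sketch of the fast gradient sign method (FGSM) and of bit-depth feature squeezing. It uses a toy logistic model in numpy purely for self-containment; the paper itself works with quantized CNNs on an FPGA accelerator, so every function and parameter here is an assumption for illustration, not the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, x, y):
    # Binary cross-entropy for a single example under a logistic model.
    p = sigmoid(np.dot(w, x))
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def input_gradient(w, x, y):
    # d(loss)/dx for the logistic model: (p - y) * w.
    p = sigmoid(np.dot(w, x))
    return (p - y) * w

def fgsm(w, x, y, eps):
    # FGSM: perturb the input in the sign direction of its loss gradient.
    return x + eps * np.sign(input_gradient(w, x, y))

def squeeze(x, bits):
    # Bit-depth feature squeezing: quantize inputs in [0, 1] to 2**bits levels.
    levels = 2 ** bits - 1
    return np.round(np.clip(x, 0.0, 1.0) * levels) / levels

rng = np.random.default_rng(0)
w = rng.normal(size=8)
x = rng.normal(size=8)
y = 1.0
x_adv = fgsm(w, x, y, eps=0.1)
# The FGSM perturbation should not decrease the loss.
assert loss(w, x_adv, y) > loss(w, x, y)
```

Adversarial training (the paper's main defense) then simply mixes such perturbed examples into the training batches, while feature squeezing reduces the input bit depth before inference, which maps naturally onto low-precision FPGA datapaths.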
Pages: 14017-14028 (12 pages)