Highly Evasive Targeted Bit-Trojan on Deep Neural Networks

Cited: 0
Authors
Jin, Lingxin [1 ]
Jiang, Wei [1 ]
Zhan, Jinyu [1 ]
Wen, Xiangyu [2 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Informat & Software Engn, Chengdu 610054, Peoples R China
[2] Chinese Univ Hong Kong, Dept Comp Sci & Engn, Hong Kong 999077, Peoples R China
Keywords
Deep neural networks; bit-flip attack; Trojan attack; targeted bit-Trojan
DOI
10.1109/TC.2024.3416705
CLC number
TP3 [Computing Technology, Computer Technology]
Discipline code
0812
Abstract
Bit-Trojan attacks built on Bit-Flip Attacks (BFAs) have emerged as severe threats to Deep Neural Networks (DNNs) deployed in safety-critical systems, since they can inject Trojans at the model deployment stage without access to the training supply chain. Existing works are mainly devoted to improving the executability of Bit-Trojan attacks while largely ignoring evasiveness. In this paper, we propose a highly Evasive Targeted Bit-Trojan (ETBT) that improves evasiveness from three aspects: reducing the number of bit-flips (which also improves executability), smoothing the activation distribution, and reducing accuracy fluctuation. Specifically, key neuron extraction is used to precisely identify essential neurons in DNNs and to decouple the key neurons of different classes, improving evasiveness with respect to accuracy fluctuation and executability. Additionally, activation-constrained trigger generation is devised to eliminate the differences between the activation distributions of Trojaned and clean models, enhancing evasiveness from the perspective of activation distribution. Finally, a constrained target-bit search strategy is designed to reduce the number of bit-flips, which directly benefits the evasiveness of ETBT. Benchmark experiments show that, compared with existing works, ETBT significantly improves evasiveness-related performance with much lower computational overhead, better robustness, and better generalizability. Our code is released at https://github.com/bluefier/ETBT.
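To make the abstract's pipeline concrete, below is a minimal, hypothetical Python/PyTorch sketch of the first two steps: key neuron extraction and activation-constrained trigger generation. All names (extract_key_neurons, generate_trigger, feature_extractor, the moment-matching penalty) are illustrative assumptions, not the authors' ETBT implementation; consult the linked repository for the actual code.

import torch
import torch.nn.functional as F

def extract_key_neurons(last_layer_weight, target_class, k=10):
    # Rank the target class's output-layer weights by magnitude and
    # return the indices of the k most influential ("key") neurons.
    scores = last_layer_weight[target_class].abs()
    return torch.topk(scores, k).indices

def generate_trigger(feature_extractor, clean_loader, key_idx, mask,
                     steps=200, lr=0.1, lam=1.0):
    # Optimize a trigger patch that (a) drives the key neurons toward
    # high activation and (b) keeps the overall activation distribution
    # close to that of clean inputs -- the evasiveness constraint.
    trigger = torch.zeros_like(mask, requires_grad=True)
    opt = torch.optim.Adam([trigger], lr=lr)
    for _ in range(steps):
        for x, _ in clean_loader:
            x_t = x * (1 - mask) + trigger * mask   # stamp the trigger
            act_t = feature_extractor(x_t)          # triggered activations
            with torch.no_grad():
                act_c = feature_extractor(x)        # clean activations
            # (a) push the key neurons high on triggered inputs
            attack_loss = -act_t[:, key_idx].mean()
            # (b) match first/second moments of the activation distribution
            dist_loss = (F.mse_loss(act_t.mean(0), act_c.mean(0))
                         + F.mse_loss(act_t.std(0), act_c.std(0)))
            loss = attack_loss + lam * dist_loss
            opt.zero_grad()
            loss.backward()
            opt.step()
            trigger.data.clamp_(0.0, 1.0)  # keep pixels in valid range
    return trigger.detach()

The remaining step, constrained target-bit search, would then select a small set of weight bits whose flips wire the key-neuron response to the target class while keeping clean accuracy stable; a greedy search under an explicit bit-flip budget is one plausible realization, though the paper's specific strategy may differ.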
Pages: 2350-2363
Page count: 14