Highly Evasive Targeted Bit-Trojan on Deep Neural Networks

Cited: 0
Authors
Jin, Lingxin [1 ]
Jiang, Wei [1 ]
Zhan, Jinyu [1 ]
Wen, Xiangyu [2 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Informat & Software Engn, Chengdu 610054, Peoples R China
[2] Chinese Univ Hong Kong, Dept Comp Sci & Engn, Hong Kong 999077, Peoples R China
Keywords
Deep neural networks; bit-flip attack; Trojan attack; targeted bit-Trojan
DOI
10.1109/TC.2024.3416705
Chinese Library Classification
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
Bit-Trojan attacks based on Bit-Flip Attacks (BFAs) have emerged as severe threats to Deep Neural Networks (DNNs) deployed in safety-critical systems, since they can inject Trojans during the model deployment stage without accessing the training supply chain. Existing works are mainly devoted to improving the executability of Bit-Trojan attacks while largely ignoring evasiveness. In this paper, we propose a highly Evasive Targeted Bit-Trojan (ETBT) that improves evasiveness from three aspects: reducing the number of bit-flips (which also improves executability), smoothing the activation distribution, and reducing accuracy fluctuation. Specifically, key neuron extraction is utilized to precisely identify essential neurons in DNNs and decouple the key neurons of different classes, improving evasiveness with respect to accuracy fluctuation and executability. Additionally, activation-constrained trigger generation is devised to eliminate the differences between the activation distributions of Trojaned and clean models, which enhances evasiveness from the perspective of activation distribution. Finally, a constrained target-bit search strategy is designed to reduce the number of bit-flips, which directly benefits the evasiveness of ETBT. Benchmark-based experiments are conducted to evaluate the superiority of ETBT. Compared with existing works, ETBT significantly improves evasiveness-related performance with much lower computational overhead, better robustness, and better generalizability. Our code is released at https://github.com/bluefier/ETBT.
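The bit-flip primitive underlying BFA-style attacks perturbs a deployed model by toggling a single bit of a stored (typically int8-quantized) weight, e.g. via Rowhammer. As a minimal illustration (not the ETBT method itself, whose bit-search strategy is described in the paper), the sketch below flips one bit of a signed 8-bit weight; the function name `flip_bit` and the example values are illustrative assumptions:

```python
def flip_bit(w: int, bit: int) -> int:
    """Flip one bit of a signed 8-bit (int8) weight value.

    w   : current weight, in the int8 range [-128, 127]
    bit : bit position to flip, 0 (LSB) .. 7 (sign bit)
    Returns the new weight, again as a signed int8 value.
    """
    u = w & 0xFF                        # reinterpret the byte as unsigned
    u ^= (1 << bit)                     # toggle the target bit
    return u - 256 if u >= 128 else u   # convert back to signed int8


# A single flip of a high-order bit changes the weight drastically:
w = 23                      # 0b00010111
print(flip_bit(w, 6))       # bit 6: 23 ^ 64  -> 87
print(flip_bit(w, 7))       # sign bit: 23 -> -105
```

High-order flips like these cause large weight shifts with only one physical fault, which is why attacks (and defenses) focus on minimizing the number of flipped bits, the aspect ETBT's constrained target-bit search addresses.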
Pages: 2350-2363 (14 pages)