NtNDet: Hardware Trojan detection based on pre-trained language models

Cited by: 0
Authors
Kuang, Shijie [1 ]
Quan, Zhe [1 ]
Xie, Guoqi [1 ]
Cai, Xiaomin [2 ,3 ]
Li, Keqin [4 ]
Affiliations
[1] Hunan Univ, Coll Comp Sci & Elect Engn, Changsha 410082, Peoples R China
[2] Hunan Univ Finance & Econ, Sch Comp Sci & Technol, Changsha, Peoples R China
[3] Acad Mil Sci, Beijing, Peoples R China
[4] SUNY Coll New Paltz, Dept Comp Sci, New Paltz, NY 12561 USA
Keywords
Gate-level netlists; Hardware Trojan detection; Large language model; Netlist-to-natural-language; Transfer learning
DOI
10.1016/j.eswa.2025.126666
Chinese Library Classification (CLC) number
TP18 [Theory of artificial intelligence]
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Hardware Trojans (HTs) are malicious modifications embedded in Integrated Circuits (ICs) that pose a significant threat to security. The concealment of HTs and the complexity of IC manufacturing make them difficult to detect. An effective solution is to identify HTs at the gate level through machine learning techniques. However, current methods primarily depend on end-to-end training, which fails to fully exploit the advantages of large-scale pre-trained models and transfer learning, and they do not take advantage of the extensive background knowledge available in massive datasets. This study proposes an HT detection approach based on large-scale pre-trained NLP models. We propose a novel approach named NtNDet, which includes a method called Netlist-to-Natural-Language (NtN) for converting gate-level netlists into a natural-language format suitable for Natural Language Processing (NLP) models. We apply the self-attention mechanism of the Transformer to model complex dependencies within the netlist. This is the first application of large-scale pre-trained models to gate-level netlist HT detection, promoting the use of pre-trained models in the security field. Experiments on the Trust-Hub, TRIT-TC, and TRIT-TS benchmarks demonstrate that our approach outperforms existing HT detection methods: precision increases by at least 5.27%, the True Positive Rate (TPR) by 3.06%, the True Negative Rate (TNR) by 0.01%, and the F1 score by about 3.17%, setting a new state of the art in HT detection.
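The abstract describes converting gate-level netlists into natural-language sentences (NtN) and then classifying them with a pre-trained Transformer. The sketch below is only an illustrative approximation of such a pipeline, not the authors' implementation: the sentence template, the regular expression for parsing gate instances, the bert-base-uncased checkpoint, and the two-label head are all assumptions, and the classifier head would need fine-tuning on labeled netlist data before its scores mean anything.

# Minimal sketch (assumptions throughout): NtN-style conversion of structural
# gate instances into English sentences, then scoring with a generic
# pre-trained Transformer sequence classifier.
import re

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Matches lines like "and U1 (n3, n1, n2);" (output port listed first,
# as in Verilog built-in gate primitives). Template is hypothetical.
GATE_PATTERN = re.compile(r"^\s*(\w+)\s+(\w+)\s*\(\s*(.*?)\s*\)\s*;")


def netlist_line_to_sentence(line: str) -> str:
    """Render one gate instance as an English sentence (illustrative NtN step)."""
    match = GATE_PATTERN.match(line)
    if match is None:
        return line.strip()
    gate_type, instance, ports = match.groups()
    signals = [p.strip() for p in ports.split(",") if p.strip()]
    if not signals:
        return line.strip()
    out, *ins = signals
    return (f"{gate_type} gate {instance} drives {out} "
            f"from inputs {', '.join(ins) if ins else 'none'}.")


def classify(lines, checkpoint="bert-base-uncased"):
    """Score converted sentences with a (to-be-fine-tuned) sequence classifier."""
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
    sentences = [netlist_line_to_sentence(line) for line in lines]
    inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits           # shape: (num_lines, 2)
    return torch.softmax(logits, dim=-1)[:, 1]    # assumed "Trojan" class probability


if __name__ == "__main__":
    sample = ["and U1 (n3, n1, n2);", "xor U2 (trig, n3, rare_net);"]
    print(classify(sample))

In this sketch the granularity (one sentence per gate instance) and the class semantics are design choices made for illustration; the paper's actual NtN encoding, context window, and fine-tuning setup may differ.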
Pages: 13