SPELL: An End-to-End Tool Flow for LLM-Guided Secure SoC Design for Embedded Systems

Cited: 0
Authors
Paria, Sudipta [1 ]
Dasgupta, Aritra [1 ]
Bhunia, Swarup [1 ]
Affiliations
[1] University of Florida, Department of Electrical & Computer Engineering, Gainesville, FL 32611, USA
Keywords
Embedded systems; Prevention and mitigation; Large language models; Network-on-chip; Manuals; Intellectual property; Hardware; Security; Internet of Things; Microprogramming; Assertion-based verification (ABV); common weakness enumerations (CWEs); large language models (LLMs); security policies; system-on-chip (SoC) security;
DOI
10.1109/LES.2024.3447691
Chinese Library Classification
TP3 [Computing technology, computer technology]
Discipline code
0812
Abstract
Modern embedded systems and Internet of Things (IoT) devices are built around system-on-chips (SoCs) as their hardware backbone, which increasingly contain many critical assets (secure communication keys, configuration bits, firmware, sensitive data, etc.). These critical assets must be protected against a wide array of potential vulnerabilities to uphold the system's confidentiality, integrity, and availability. Today's SoC designs contain diverse intellectual property (IP) blocks, often acquired from multiple third-party IP vendors. Secure hardware design using them inevitably relies on the accrued domain knowledge of well-trained security experts. In this letter, we introduce SPELL, a novel end-to-end framework for the automated development of secure SoC designs. It leverages conversational large language models (LLMs) to automatically identify security vulnerabilities in a target SoC and map them to the evolving database of common weakness enumerations (CWEs); SPELL then filters the relevant CWEs, subsequently converting them to SystemVerilog assertions (SVAs) for verification, and finally addresses the vulnerabilities via centralized security policy enforcement. We have implemented the SPELL framework using popular LLMs, such as ChatGPT and Gemini, to analyze their efficacy in generating appropriate CWEs from user-defined SoC specifications and implementing corresponding security policies for an open-source SoC benchmark. We have also explored the limitations of existing pretrained conversational LLMs in this context.
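The abstract's pipeline (spec → CWE mapping → CWE filtering → SVA generation → policy enforcement) can be sketched in Python. This is a minimal illustration, not the paper's implementation: the prompts, the stub LLM replies, the asset-based filter, and all function names are assumptions standing in for SPELL's actual conversational interaction with ChatGPT/Gemini. CWE-1234 and CWE-1262 are real hardware CWE entries used here only as example outputs.

```python
def llm_stub(prompt: str) -> str:
    # Placeholder for a conversational LLM API call (ChatGPT/Gemini);
    # replies are canned so the sketch runs deterministically.
    if "list CWEs" in prompt:
        return "CWE-1234, CWE-1262"
    return "assert property (@(posedge clk) lock |-> !dbg_wr_en);"

def identify_cwes(spec: str, llm=llm_stub) -> list:
    """Ask the LLM to map a user-defined SoC spec to candidate CWEs."""
    reply = llm(f"Given this SoC spec, list CWEs: {spec}")
    return [c.strip() for c in reply.split(",")]

def filter_relevant(cwes: list, assets: set) -> list:
    """Keep only CWEs tied to declared assets (toy relevance filter)."""
    asset_of = {"CWE-1234": "debug", "CWE-1262": "registers"}
    return [c for c in cwes if asset_of.get(c) in assets]

def cwe_to_sva(cwe: str, llm=llm_stub) -> str:
    """Have the LLM draft a SystemVerilog assertion for one CWE."""
    return llm(f"Write an SVA checking {cwe}")

spec = "SoC with locked debug port and protected config registers"
cwes = filter_relevant(identify_cwes(spec), assets={"debug"})
svas = [cwe_to_sva(c) for c in cwes]
print(cwes)     # filtered CWE list
print(svas[0])  # drafted assertion for downstream verification
```

In the actual framework these drafted SVAs would be checked against the RTL by a formal verification tool, with confirmed vulnerabilities handled through the centralized security policy engine; that back end is outside the scope of this sketch.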
Pages: 365-368
Page count: 4