Harnessing Large Language Models for Software Vulnerability Detection: A Comprehensive Benchmarking Study

Cited: 0
Authors
Tamberg, Karl [1]
Bahsi, Hayretdin [1,2]
Affiliations
[1] Tallinn Univ Technol, Sch Informat Technol, Tallinn 12618, Estonia
[2] No Arizona Univ, Sch Informat Comp & Cyber Syst, Flagstaff, AZ 86011 USA
Source
IEEE ACCESS | 2025, Vol. 13
Keywords
Benchmarking; large language models; LLM; prompting; software vulnerabilities; static code analyser; tools
DOI
10.1109/ACCESS.2025.3541146
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Despite the variety of approaches employed to detect software vulnerabilities, the number of reported vulnerabilities has trended upward over the years. This suggests that problems are not caught before code is released, which could stem from many factors, such as lack of awareness, the limited efficacy of existing vulnerability detection tools, or those tools not being user-friendly. To address some of the shortcomings of traditional vulnerability detection tools, we propose using large language models (LLMs) to assist in finding vulnerabilities in source code. LLMs have shown a remarkable ability to understand and generate code, underlining their potential in code-related tasks. Our aim is to test multiple state-of-the-art LLMs and identify the prompting strategies that extract the most value from them. We leverage findings from prompting-focused research, benchmarking approaches such as chain of thought, tree of thought, and self-consistency for vulnerability detection use cases. We provide an overview of the strengths and weaknesses of the LLM-based approach and compare the results to those of traditional static analysis tools. We find that LLMs can pinpoint more issues than traditional static analysis tools, outperforming them in recall and F1 score. However, LLMs are more prone to false positive classifications than traditional tools. The experiments are conducted on the Java programming language, and the results should benefit software developers and security analysts responsible for ensuring that code is free of vulnerabilities.
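One of the prompting strategies the abstract benchmarks, self-consistency, samples several independent answers from the model for the same input and takes a majority vote. The sketch below is a rough illustration of that idea, not the paper's implementation; `query_model` is a hypothetical stand-in for an actual LLM API call, here replaced by canned responses so the example is self-contained.

```python
from collections import Counter

def classify_self_consistent(code_snippet, query_model, n_samples=5):
    """Self-consistency voting: query the model several times for the same
    snippet and return the majority label plus the agreement ratio."""
    votes = [query_model(code_snippet) for _ in range(n_samples)]
    label, count = Counter(votes).most_common(1)[0]
    return label, count / n_samples

# Stand-in for a real LLM call: a real model would be sampled at non-zero
# temperature, so repeated calls to the same prompt can disagree.
canned = iter(["vulnerable", "vulnerable", "safe", "vulnerable", "safe"])
snippet = 'String q = "SELECT * FROM users WHERE id=" + userInput;'
label, agreement = classify_self_consistent(snippet, lambda s: next(canned))
```

With the canned responses above, three of five samples say "vulnerable", so the vote settles on that label with 0.6 agreement; in practice the agreement ratio can double as a rough confidence signal when triaging findings.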
Pages: 29698-29717 (20 pages)
References
64 items
  • [1] Ahmad N. H., 2010, P C ENG TECHN ED, P1
  • [2] Akcura K., 2015, Tech. Rep.
  • [3] Almubairik NA, 2016, INT CONF INTERNET, P413, DOI 10.1109/ICITST.2016.7856742
  • [4] Amankwah R., Chen J., Song H., Kudjo P.K., Bug Detection in Java Code: An Extensive Evaluation of Static Analysis Tools Using Juliet Test Suites, SOFTWARE-PRACTICE & EXPERIENCE, 2023, 53(05): 1125-1143
  • [5] [Anonymous], 2017, Juliet Java 1.3
  • [6] [Anonymous], 2019, We Need a Safer Systems Programming Language
  • [7] Chakraborty S., Krishna R., Ding Y., Ray B., Deep Learning Based Vulnerability Detection: Are We There Yet?, IEEE TRANSACTIONS ON SOFTWARE ENGINEERING, 2022, 48(09): 3280-3296
  • [8] Chen Mark, 2021, PREPRINT
  • [9] Cheshkov A, 2023, Arxiv, DOI arXiv:2304.07232
  • [10] CodeQL, about us