Multitask-Based Evaluation of Open-Source LLM on Software Vulnerability

Times Cited: 0
Authors
Yin, Xin [1]
Ni, Chao [1,2]
Wang, Shaohua [3]
Affiliations
[1] Zhejiang Univ, State Key Lab Blockchain & Data Secur, Hangzhou 310007, Peoples R China
[2] Hangzhou High Tech Zone Binjiang Inst Blockchain &, Hangzhou 310051, Peoples R China
[3] Cent Univ Finance & Econ, Beijing 100081, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Software; Training; Biological system modeling; Codes; Software quality; Large language models; Source coding; Software systems; Software engineering; Nickel; Software vulnerability analysis; large language model
DOI
10.1109/TSE.2024.3470333
Chinese Library Classification
TP31 [Computer Software]
Discipline Codes
081202; 0835
Abstract
This paper proposes a pipeline for quantitatively evaluating interactive Large Language Models (LLMs) using publicly available datasets. We carry out an extensive technical evaluation of LLMs on the Big-Vul dataset, covering four common software vulnerability tasks, and thereby assess the multi-task capabilities of LLMs. We find that existing state-of-the-art approaches and pre-trained Language Models (LMs) are generally superior to LLMs in software vulnerability detection. However, in software vulnerability assessment and location, certain LLMs (e.g., CodeLlama and WizardCoder) outperform pre-trained LMs, and providing more contextual information enhances the vulnerability assessment capabilities of LLMs. Moreover, LLMs exhibit strong vulnerability description capabilities, but their tendency to produce excessive output significantly weakens their performance relative to pre-trained LMs. Overall, although LLMs perform well in some respects, they still need improvement in understanding subtle differences in code vulnerabilities and in describing vulnerabilities before they can fully realize their potential. Our evaluation pipeline provides valuable insights into the capabilities of LLMs in handling software vulnerabilities.
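The evaluation the abstract describes — querying a model on each of several vulnerability tasks over a labeled dataset and scoring the responses — can be sketched as a small loop. This is an illustrative sketch only, not the authors' pipeline: the task names follow the four tasks mentioned above, while `query_llm`, the dataset layout, and exact-match accuracy as the metric are assumptions for the example.

```python
# Minimal multi-task evaluation sketch (illustrative; not the paper's code).
# `query_llm` is a hypothetical stand-in for a real model call.
from typing import Callable, Dict, List, Tuple

TASKS = ["detection", "assessment", "location", "description"]

def accuracy(preds: List[str], labels: List[str]) -> float:
    """Fraction of exact matches between predictions and labels."""
    assert len(preds) == len(labels)
    return sum(p == l for p, l in zip(preds, labels)) / len(labels) if labels else 0.0

def evaluate(query_llm: Callable[[str, str], str],
             dataset: Dict[str, List[Tuple[str, str]]]) -> Dict[str, float]:
    """Query the model on each task's (code, label) pairs and score the answers."""
    scores = {}
    for task in TASKS:
        samples = dataset.get(task, [])
        preds = [query_llm(task, code) for code, _ in samples]
        labels = [label for _, label in samples]
        scores[task] = accuracy(preds, labels)
    return scores

# Toy run with a trivial "model" that always answers "vulnerable".
toy = {"detection": [("int f(){}", "vulnerable"), ("void g(){}", "safe")]}
print(evaluate(lambda task, code: "vulnerable", toy))
```

In a real setting the per-task metric would differ (e.g. F1 for detection, CVSS-severity agreement for assessment, line overlap for location, BLEU/ROUGE for description); a single `accuracy` is used here only to keep the sketch self-contained.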
Pages: 3071-3087
Page count: 17