DeVAIC: A tool for security assessment of AI-generated code

Cited by: 1
Authors
Cotroneo, Domenico [1 ]
De Luca, Roberta [1 ]
Liguori, Pietro [1 ]
Affiliation
[1] Univ Naples Federico II, I-80125 Naples, Italy
Keywords
Static code analysis; Vulnerability detection; AI-code generators; Python; STATIC ANALYSIS; CHATGPT;
DOI
10.1016/j.infsof.2024.107572
Chinese Library Classification
TP [Automation technology, computer technology];
Discipline code
0812 ;
Abstract
Context: AI code generators are revolutionizing code writing and software development, but their training on large datasets, which may include untrusted source code, raises security concerns. Furthermore, these generators can produce incomplete code snippets that are challenging to evaluate with current solutions. Objective: This work introduces DeVAIC (Detection of Vulnerabilities in AI-generated Code), a tool that evaluates the security of AI-generated Python code and overcomes the challenge of examining incomplete code. Methods: We followed a methodological approach that involved gathering vulnerable samples, extracting implementation patterns, and crafting regular expressions to develop the proposed tool. DeVAIC implements a set of regex-based detection rules covering 35 Common Weakness Enumerations (CWEs) that fall under the OWASP Top 10 vulnerability categories. Results: We used four popular AI models to generate Python code, which served as the basis for evaluating the tool's effectiveness. DeVAIC showed a statistically significant improvement in detecting security vulnerabilities over state-of-the-art solutions, achieving an F1 score and accuracy of 94% while maintaining a low computational cost of 0.14 s per code snippet on average. Conclusions: The proposed tool provides a lightweight and efficient solution for vulnerability detection, even on incomplete code.
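The abstract describes detection rules built from regular expressions that match insecure implementation patterns, an approach that works even on incomplete snippets because no parsing is required. The sketch below illustrates the general idea with a few hypothetical, simplified rules (the CWE-to-pattern mapping here is an assumption for illustration, not DeVAIC's actual 35-CWE rule set):

```python
import re

# Hypothetical, simplified rules in the spirit of the tool: each maps a
# CWE label to a regular expression for a known-insecure Python pattern.
RULES = {
    "CWE-78 (OS command injection)": re.compile(r"subprocess\.\w+\([^)]*shell\s*=\s*True"),
    "CWE-95 (code injection)": re.compile(r"\beval\s*\("),
    "CWE-327 (broken crypto)": re.compile(r"hashlib\.md5\s*\("),
}

def scan_snippet(code: str) -> list[str]:
    """Return the CWE labels whose pattern matches the snippet.
    Regex matching needs no syntax tree, so truncated or incomplete
    code is handled gracefully."""
    return [cwe for cwe, pattern in RULES.items() if pattern.search(code)]

snippet = "subprocess.run(cmd, shell=True)\nh = hashlib.md5(data)"
print(scan_snippet(snippet))
# flags both the shell=True call and the weak MD5 hash
```

Scanning is a single pass of precompiled patterns per snippet, which is consistent with the low per-snippet cost the abstract reports for a lightweight static approach.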
Pages: 15