DeVAIC: A tool for security assessment of AI-generated code

Cited by: 1
Authors
Cotroneo, Domenico [1 ]
De Luca, Roberta [1 ]
Liguori, Pietro [1 ]
Affiliations
[1] Univ Naples Federico II, I-80125 Naples, Italy
Keywords
Static code analysis; Vulnerability detection; AI-code generators; Python; STATIC ANALYSIS; CHATGPT
DOI
10.1016/j.infsof.2024.107572
Chinese Library Classification (CLC)
TP [Automation technology and computer technology];
Subject classification code
0812;
Abstract
Context: AI code generators are revolutionizing code writing and software development, but their training on large datasets, including potentially untrusted source code, raises security concerns. Furthermore, these generators can produce incomplete code snippets that are challenging to evaluate with current solutions.
Objective: This work introduces DeVAIC (Detection of Vulnerabilities in AI-generated Code), a tool to evaluate the security of AI-generated Python code that overcomes the challenge of examining incomplete code.
Methods: We followed a methodological approach that involved gathering vulnerable samples, extracting implementation patterns, and creating regular expressions to develop the proposed tool. DeVAIC implements a set of detection rules based on regular expressions that cover 35 Common Weakness Enumerations (CWEs) falling under the OWASP Top 10 vulnerability categories.
Results: We used four popular AI models to generate Python code, which served as the basis for evaluating the effectiveness of our tool. DeVAIC demonstrated a statistically significant difference in its ability to detect security vulnerabilities compared to state-of-the-art solutions, achieving an F1 score and accuracy of 94% while maintaining a low computational cost of 0.14 s per code snippet on average.
Conclusions: The proposed tool provides a lightweight and efficient solution for vulnerability detection, even on incomplete code.
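To make the detection approach concrete, the sketch below illustrates how regular-expression rules can be mapped to CWE identifiers and matched against a possibly incomplete Python snippet. The rules, CWE mappings, and function names shown here are hypothetical examples for illustration only; they are not DeVAIC's actual rule set.

import re

# Hypothetical rules mapping regular expressions to CWE identifiers.
# These only sketch the regex-based approach described in the abstract;
# they are NOT DeVAIC's actual detection rules.
RULES = {
    "CWE-78 (OS Command Injection)":   re.compile(r"subprocess\.\w+\([^)]*shell\s*=\s*True"),
    "CWE-95 (Eval Injection)":         re.compile(r"\beval\s*\("),
    "CWE-327 (Broken Crypto)":         re.compile(r"hashlib\.(md5|sha1)\s*\("),
    "CWE-798 (Hardcoded Credentials)": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
}

def scan_snippet(snippet: str) -> list:
    """Return the names of rules whose pattern matches the (possibly incomplete) snippet."""
    return [name for name, pattern in RULES.items() if pattern.search(snippet)]

if __name__ == "__main__":
    ai_generated = "import subprocess\nsubprocess.run(cmd, shell=True)\npassword = 'hunter2'"
    for finding in scan_snippet(ai_generated):
        print("Potential vulnerability:", finding)

Because this kind of pattern matching requires no parsing, it can flag suspicious constructs even in syntactically incomplete code, which is the property the abstract highlights.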
Pages: 15
Related papers
50 items in total
  • [31] Generative AI and AI-generated Contents on Social Media
    Wang, Yichuan
    Su, Yiran
    Proceedings of the Annual Hawaii International Conference on System Sciences, 2024, : 2714 - 2715
  • [32] AI-generated questions for urological competency assessment: a prospective educational study
    Başaranoğlu, Mert
    Akbay, Erdem
    Erdem, Erim
    BMC Medical Education, 25 (1)
  • [33] Gender stereotypes in AI-generated images
    Garcia-Ull, Francisco-Jose
    Melero-Lazaro, Monica
    PROFESIONAL DE LA INFORMACION, 2023, 32 (05):
  • [34] AI-Generated Books: Blueprint for the Future?
    Allen, Katherine
    ECONTENT, 2019, 42 (03) : 10 - 10
  • [35] Avoid patenting AI-generated inventions
    Gervais, Daniel
    Nature, 2023, 622 : 31 - 31
  • [36] Physical Layer Security for AI-Generated Content: Power and Elements Allocation for Active RIS
    Duan, Junhao
    Zhang, Ying
    Gu, Jinyuan
    Zhang, Lei
    Duan, Wei
    IEEE TRANSACTIONS ON CONSUMER ELECTRONICS, 2024, 70 (01) : 839 - 848
  • [37] Comparing Student Preferences for AI-Generated and Peer-Generated Feedback in AI-driven Formative Peer Assessment
    Shin, Insub
    Hwang, Su Bhin
    Yoo, Yun Joo
    Bae, Sooan
    Kim, Rae Yeong
    FIFTEENTH INTERNATIONAL CONFERENCE ON LEARNING ANALYTICS & KNOWLEDGE, LAK 2025, 2025, : 159 - 169
  • [38] Computational Power and Subjective Quality of AI-Generated Outputs: The Case of Aesthetic Judgement and Positive Emotions in AI-Generated Art
    Grassini, Simone
    INTERNATIONAL JOURNAL OF HUMAN-COMPUTER INTERACTION, 2024,
  • [39] All the News That's Fit to Fabricate: AI-Generated Text as a Tool of Media Misinformation
    Kreps, Sarah
    McCain, R. Miles
    Brundage, Miles
    JOURNAL OF EXPERIMENTAL POLITICAL SCIENCE, 2022, 9 (01) : 104 - 117
  • [40] Quality Assessment of AI-Generated Image Based on Cross-modal Correlation
    Zhang, Yunhao
    Jia, Menglin
    Zhou, Wenbo
    Yang, Yang
    2024 3RD INTERNATIONAL CONFERENCE ON IMAGE PROCESSING AND MEDIA COMPUTING, ICIPMC 2024, 2024, : 378 - 384