Towards Efficient Fine-Tuning of Language Models With Organizational Data for Automated Software Review

Cited by: 2
Authors
Nashaat, Mona [1 ]
Miller, James [2 ]
Affiliations
[1] Port Said Univ, Dept Elect Engn, Port Said 42526, Egypt
[2] Univ Alberta, Dept Elect & Comp Engn, Edmonton, AB, Canada
Keywords
Codes; Reviews; Task analysis; Data models; Large language models; Computational modeling; Training; Artificial intelligence; Software engineering; Reinforcement learning; Software reviews
DOI
10.1109/TSE.2024.3428324
Chinese Library Classification (CLC)
TP31 [Computer Software]
Subject Classification Codes
081202; 0835
Abstract
Large language models like BERT and GPT possess significant capabilities and potential impacts across various applications. Software engineers often use these models for code-related tasks, including generating, debugging, and summarizing code. Nevertheless, large language models still have several flaws, including model hallucination (e.g., generating erroneous code or producing outdated and inaccurate programs) and the substantial computational resources and energy required for training and fine-tuning. To tackle these challenges, we propose CodeMentor, a few-shot learning framework for training large language models with the data available within an organization. We employ the framework to train a language model for code review activities, such as code refinement and review generation. The framework utilizes heuristic rules and weak supervision techniques to leverage available data, such as previous review comments, issue reports, and related code updates. The framework then employs the constructed dataset to fine-tune LLMs for code review tasks. Additionally, the framework integrates domain expertise through reinforcement learning with human feedback, allowing domain experts to assess the generated code and enhance model performance. To assess the proposed model, we compare it against four state-of-the-art techniques on various code review tasks. The experimental results show that CodeMentor outperforms the state-of-the-art approaches on all tasks, with improvements of up to 22.3%, 43.4%, and 24.3% in code quality estimation, review generation, and bug report summarization, respectively.
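The abstract gives no implementation details for the weak-supervision step, so the sketch below is only a minimal illustration of how heuristic rules over review comments, linked issue reports, and follow-up code updates could be combined by majority vote into weak labels for code quality estimation. All identifiers here (ReviewItem, lf_keywords, and so on) are hypothetical names chosen for the example, not names from the paper.

```python
from dataclasses import dataclass
from collections import Counter

# Weak label values: CHANGE = review requests changes, APPROVE = code accepted,
# ABSTAIN = the heuristic has no opinion on this item.
CHANGE, APPROVE, ABSTAIN = 1, -1, 0

@dataclass
class ReviewItem:
    comment: str               # reviewer comment text
    linked_issue: bool         # review references an issue report
    followed_by_commit: bool   # a code update followed the review

def lf_keywords(item: ReviewItem) -> int:
    # Comments containing change-request vocabulary suggest changes were asked for.
    cues = ("fix", "bug", "refactor", "typo", "nit")
    return CHANGE if any(w in item.comment.lower() for w in cues) else ABSTAIN

def lf_issue_link(item: ReviewItem) -> int:
    # A linked issue report usually indicates a defect was found.
    return CHANGE if item.linked_issue else ABSTAIN

def lf_followup_commit(item: ReviewItem) -> int:
    # A follow-up code update implies the reviewer asked for changes.
    return CHANGE if item.followed_by_commit else APPROVE

LABELING_FUNCTIONS = (lf_keywords, lf_issue_link, lf_followup_commit)

def weak_label(item: ReviewItem) -> int:
    """Majority vote over the non-abstaining labeling functions."""
    votes = [v for lf in LABELING_FUNCTIONS if (v := lf(item)) != ABSTAIN]
    return Counter(votes).most_common(1)[0][0] if votes else ABSTAIN

if __name__ == "__main__":
    item = ReviewItem("Please fix the null check before merging.",
                      linked_issue=True, followed_by_commit=True)
    print(weak_label(item))  # -> 1 (change requested)
```

In a pipeline like the one the abstract describes, such weak labels would populate the fine-tuning dataset; a production system would likely weight labeling functions by estimated accuracy (as weak-supervision frameworks such as Snorkel do) rather than use a flat majority vote.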
Pages: 2240-2253
Page count: 14