Towards Efficient Fine-Tuning of Language Models With Organizational Data for Automated Software Review

Cited by: 2
Authors
Nashaat, Mona [1 ]
Miller, James [2 ]
Affiliations
[1] Port Said Univ, Dept Elect Engn, Port Said 42526, Egypt
[2] Univ Alberta, Dept Elect & Comp Engn, Edmonton, AB, Canada
Keywords
Codes; Reviews; Task analysis; Data models; Large language models; Computational modeling; Training; Artificial intelligence; software engineering; reinforcement learning; software reviews
DOI
10.1109/TSE.2024.3428324
Chinese Library Classification (CLC) number
TP31 [Computer software]
Subject classification codes
081202; 0835
Abstract
Large language models like BERT and GPT possess significant capabilities and potential impacts across various applications. Software engineers often use these models for code-related tasks, including generating, debugging, and summarizing code. Nevertheless, large language models still have several flaws, including model hallucination (e.g., generating erroneous code and producing outdated and inaccurate programs) and the substantial computational resources and energy required for training and fine-tuning. To tackle these challenges, we propose CodeMentor, a few-shot learning framework for training large language models with the data available within an organization. We employ the framework to train a language model for code review activities, such as code refinement and review generation. The framework utilizes heuristic rules and weak supervision techniques to leverage available data, such as previous review comments, issue reports, and related code updates. It then employs the constructed dataset to fine-tune LLMs for code review tasks. Additionally, the framework integrates domain expertise through reinforcement learning with human feedback, which allows domain experts to assess the generated code and enhance model performance. To assess the proposed model, we evaluate it against four state-of-the-art techniques on various code review tasks. The experimental results attest that CodeMentor outperforms the state-of-the-art approaches in all tasks, with improvements of up to 22.3%, 43.4%, and 24.3% in code quality estimation, review generation, and bug report summarization, respectively.
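The weak-supervision step sketched in the abstract (heuristic rules over past review comments, aggregated into training labels) can be illustrated with a minimal, self-contained Python sketch. The labeling functions, keyword lists, and names below (e.g., flags_bug_keyword, weak_label) are hypothetical stand-ins, and majority voting is only one simple aggregation scheme; the record does not specify CodeMentor's actual heuristic rules or label model.

```python
from collections import Counter

# Hypothetical labeling functions: each heuristic inspects a review
# comment and votes "defect", "ok", or abstains (returns None).
# These rules are illustrative, not the ones used by CodeMentor.

BUG_KEYWORDS = {"bug", "crash", "overflow", "null", "leak"}
STYLE_KEYWORDS = {"nit", "rename", "formatting", "typo"}

def flags_bug_keyword(comment: str):
    """Vote 'defect' when the comment mentions a known defect term."""
    words = set(comment.lower().split())
    return "defect" if words & BUG_KEYWORDS else None

def mentions_style_nit(comment: str):
    """Vote 'ok' when the comment looks like a minor style remark."""
    words = set(comment.lower().split())
    return "ok" if words & STYLE_KEYWORDS else None

def approved_phrase(comment: str):
    """Vote 'ok' when the reviewer explicitly approves the change."""
    return "ok" if "lgtm" in comment.lower() else None

LABELING_FUNCTIONS = [flags_bug_keyword, mentions_style_nit, approved_phrase]

def weak_label(comment: str):
    """Combine labeling-function votes by majority; None if all abstain."""
    votes = [v for lf in LABELING_FUNCTIONS if (v := lf(comment)) is not None]
    if not votes:
        return None  # unlabeled; excluded from the fine-tuning set
    return Counter(votes).most_common(1)[0][0]

if __name__ == "__main__":
    comments = [
        "LGTM, just a small fix: rename this variable",
        "This loop can overflow on empty input and crash",
        "Consider extracting a helper here",
    ]
    for c in comments:
        print(f"{weak_label(c)!r:10} <- {c}")
```

In a pipeline of this shape, comments where all functions abstain are dropped, and the remaining weakly labeled pairs form the dataset used for fine-tuning before the human-feedback stage.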
Pages: 2240-2253
Page count: 14