Towards Efficient Fine-Tuning of Language Models With Organizational Data for Automated Software Review

Cited by: 2
Authors
Nashaat, Mona [1 ]
Miller, James [2 ]
Affiliations
[1] Port Said Univ, Dept Elect Engn, Port Said 42526, Egypt
[2] Univ Alberta, Dept Elect & Comp Engn, Edmonton, AB, Canada
Keywords
Codes; Reviews; Task analysis; Data models; Large language models; Computational modeling; Training; Artificial intelligence; software engineering; large language models; reinforcement learning; software reviews; CODE
DOI
10.1109/TSE.2024.3428324
CLC Number
TP31 [Computer Software]
Subject Classification Codes
081202; 0835
Abstract
Large language models like BERT and GPT possess significant capabilities and potential impacts across various applications. Software engineers often use these models for code-related tasks, including generating, debugging, and summarizing code. Nevertheless, large language models still have several flaws, including model hallucination (e.g., generating erroneous code or producing outdated and inaccurate programs) and the substantial computational resources and energy required for training and fine-tuning. To tackle these challenges, we propose CodeMentor, a few-shot learning framework for training large language models with the data available within an organization. We employ the framework to train a language model for code review activities, such as code refinement and review generation. The framework uses heuristic rules and weak supervision techniques to leverage available data, such as previous review comments, issue reports, and related code updates. It then employs the constructed dataset to fine-tune LLMs for code review tasks. Additionally, the framework integrates domain expertise through reinforcement learning with human feedback, allowing domain experts to assess the generated code and enhance model performance. To assess its performance, we compare the proposed model against four state-of-the-art techniques on various code review tasks. The experimental results show that CodeMentor outperforms the state-of-the-art approaches on all tasks, with improvements of up to 22.3%, 43.4%, and 24.3% in code quality estimation, review generation, and bug report summarization, respectively.
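The weak-supervision step described in the abstract, deriving noisy training labels from heuristic rules over historical review data, can be pictured as labeling functions that vote on each record. The Python sketch below is a minimal illustration under stated assumptions: the record fields, rule names, thresholds, and the majority-vote aggregation are all hypothetical and are not the paper's actual implementation.

# Minimal sketch of heuristic weak supervision over review history.
# All rules, field names, and thresholds below are illustrative
# assumptions, not CodeMentor's actual labeling logic.

from dataclasses import dataclass
from typing import Callable, List, Optional
from collections import Counter

ACCEPT, REJECT, ABSTAIN = 1, 0, None

@dataclass
class ReviewRecord:
    diff: str              # the proposed code change
    comments: List[str]    # reviewer comments attached to the change
    merged: bool           # whether the change was ultimately merged

def lf_merged(r: ReviewRecord) -> Optional[int]:
    # A merged change is weak evidence of acceptable quality.
    return ACCEPT if r.merged else ABSTAIN

def lf_blocking_comment(r: ReviewRecord) -> Optional[int]:
    # Blocking phrases in comments weakly signal a low-quality change.
    blocking = ("needs work", "do not merge", "please fix")
    text = " ".join(r.comments).lower()
    return REJECT if any(p in text for p in blocking) else ABSTAIN

def lf_large_diff(r: ReviewRecord) -> Optional[int]:
    # Assumed heuristic: very large diffs tend to require rework.
    return REJECT if len(r.diff.splitlines()) > 500 else ABSTAIN

LABELING_FUNCTIONS: List[Callable[[ReviewRecord], Optional[int]]] = [
    lf_merged, lf_blocking_comment, lf_large_diff,
]

def weak_label(record: ReviewRecord) -> Optional[int]:
    # Aggregate labeling-function votes by majority; None if all abstain.
    votes = [lf(record) for lf in LABELING_FUNCTIONS if lf(record) is not None]
    if not votes:
        return None
    return Counter(votes).most_common(1)[0][0]

if __name__ == "__main__":
    record = ReviewRecord(
        diff="+ def add(a, b):\n+     return a + b\n",
        comments=["LGTM"],
        merged=True,
    )
    print(weak_label(record))  # -> 1 (ACCEPT)

In practice, a probabilistic label model (as in data-programming systems) would replace the simple majority vote, weighting each labeling function by its estimated accuracy; the resulting labels then feed the fine-tuning stage described in the abstract.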
Pages: 2240-2253
Page count: 14