ACoRA - A Platform for Automating Code Review Tasks

Cited by: 0
Authors
Ochodek, Miroslaw [1 ]
Staron, Miroslaw [2 ]
Affiliations
[1] Poznan Univ Tech, Inst Comp Sci, Poznan, Poland
[2] Univ Gothenburg, Chalmers Univ Technol, IT Fac, Gothenburg, Sweden
Keywords
code reviews; continuous integration; BERT; machine learning
DOI
10.37190/e-Inf250102
Chinese Library Classification
TP31 [Computer Software]
Discipline Codes
081202; 0835
Abstract
Background: Modern Code Reviews (MCR) are frequently adopted to assure code and design quality in continuous integration and deployment projects. Although tiresome, they also serve a secondary purpose of learning about the software product. Aim: Our objective is to design and evaluate a support tool that helps software developers focus on the most important code fragments to review and provides them with suggestions on what should be reviewed in this code. Method: We used design science research to develop and evaluate a tool that automates code review tasks by providing recommendations to code reviewers. The tool is based on Transformer-based machine learning models for natural language processing, applied both to programming language code (patch content) and to the review comments. We evaluate both the ability of the language model to match similar lines and its ability to correctly indicate the nature of the potential problems, encoded as a set of categories. We evaluated the tool on two open-source projects and one industry project. Results: The proposed tool correctly annotated (only true positives) 35%-41% and partially correctly annotated 76%-84% of the code fragments to be reviewed with labels corresponding to the different aspects of the code the reviewer should focus on. Conclusion: Comparing our study to similar solutions, we conclude that indicating the lines to be reviewed and suggesting the nature of the potential problems in the code yields higher accuracy than suggesting entire code changes, as considered in other studies. We also found that the differences depend more on the consistency of commenting than on the ability of the model to find similar lines.
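The core mechanism described in the Method section can be illustrated with a minimal sketch: embed each changed line and each previously commented line with a Transformer encoder, then propose the label of the most similar historical line as the review focus for the new line. The sketch below is an assumption-laden illustration, not the ACoRA implementation; the model name (microsoft/codebert-base), the similarity threshold, and the example categories are placeholders.

# Minimal sketch (assumptions noted above), not the authors' implementation.
import torch
from transformers import AutoTokenizer, AutoModel

MODEL_NAME = "microsoft/codebert-base"  # assumed stand-in for the paper's BERT-based model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

def embed_lines(lines):
    """Return one mean-pooled embedding per code line."""
    batch = tokenizer(lines, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state       # (batch, seq_len, dim)
    mask = batch["attention_mask"].unsqueeze(-1)        # (batch, seq_len, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1) # mean over real tokens only

# Historical lines that previously attracted review comments, with hypothetical
# problem categories used purely for illustration.
reviewed_lines = ["if (x = 1) {", "strcpy(buf, input);"]
labels = ["assignment-in-condition", "unsafe-copy"]

new_patch_lines = ["if (count = 0) {", "return total;"]

hist = torch.nn.functional.normalize(embed_lines(reviewed_lines), dim=1)
new = torch.nn.functional.normalize(embed_lines(new_patch_lines), dim=1)
similarity = new @ hist.T                               # cosine similarity matrix

for line, sims in zip(new_patch_lines, similarity):
    best = int(sims.argmax())
    if sims[best] > 0.9:                                # illustrative threshold
        print(f"review: {line!r} -> suggested focus: {labels[best]}")

In this sketch, only lines whose embedding is close to a previously commented line are surfaced to the reviewer, together with the category of the matched historical comment; the paper evaluates exactly these two abilities (line matching and category indication) separately.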
Pages: 36