Large language models for code completion: A systematic literature review

Cited by: 5
Authors
Husein, Rasha Ahmad [1 ]
Aburajouh, Hala [1 ]
Catal, Cagatay [1 ]
Affiliations
[1] Qatar Univ, Dept Comp Sci & Engn, Doha, Qatar
Keywords
Code completion; Large language models; Deep learning; Transformers
DOI
10.1016/j.csi.2024.103917
Chinese Library Classification (CLC)
TP3 [Computing technology; computer technology]
Discipline Classification Code
0812
Abstract
Code completion is a fundamental aspect of modern software development. Integrating code completion tools into an Integrated Development Environment (IDE) or code editor speeds up code writing, reduces errors, and lowers cognitive load, thereby boosting developer productivity. Completion is achieved by predicting subsequent tokens such as keywords, variable names, types, function names, and operators. Although code completion can be implemented with a range of techniques, recent research has centered on Deep Learning methods, particularly Large Language Models (LLMs) built on the Transformer architecture. Several research papers have examined the use of LLMs for code completion, but these studies are fragmented, and no systematic overview of the topic exists. We therefore conducted a Systematic Literature Review (SLR) to investigate how LLMs have been applied to code completion so far. We formulated several research questions addressing how LLMs have been integrated into code completion-related tasks and how effective they are in this context. We retrieved 244 papers from scientific databases using automated keyword searches and, following the SLR methodology, selected 23 primary studies for in-depth analysis. The study categorizes the granularity levels of LLM-based code completion in IDEs, examines the issues in current code completion systems and how LLMs address them, and reviews the pre-training and fine-tuning methods employed. It also identifies open research problems and outlines future research directions. Our analysis shows that LLMs substantially improve code completion performance across several programming languages and contexts, and that their ability to predict relevant code snippets from context and partial input considerably boosts developer productivity.
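As a minimal illustration of the mechanism the abstract describes (a Transformer LLM predicting the tokens that follow partially written code) the Python sketch below uses the Hugging Face transformers library with the publicly available Salesforce/codegen-350M-mono checkpoint. The model choice is illustrative only and is not drawn from the surveyed studies; any causal code language model would serve.

# Minimal sketch of LLM-based code completion: predict the tokens that
# follow a partially typed code prefix, as an IDE plugin would.
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative checkpoint (an assumption, not a model from the review).
MODEL = "Salesforce/codegen-350M-mono"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

# The partial input a developer might have typed so far.
prefix = "def fibonacci(n):\n    "

inputs = tokenizer(prefix, return_tensors="pt")
# Greedy decoding keeps the sketch deterministic; the model extends the
# prefix with predicted keywords, identifiers, operators, and so on.
completion = model.generate(
    **inputs,
    max_new_tokens=32,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(completion[0], skip_special_tokens=True))

In a real IDE integration, the generated text would typically be truncated at a syntactic boundary and offered as a ranked suggestion rather than printed.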
Pages: 15