On the Robustness of Code Generation Techniques: An Empirical Study on GitHub Copilot

Cited by: 42
Authors
Mastropaolo, Antonio [1 ]
Pascarella, Luca [1 ]
Guglielmi, Emanuela [2 ]
Ciniselli, Matteo [1 ]
Scalabrino, Simone [2 ]
Oliveto, Rocco [2 ]
Bavota, Gabriele [1 ]
Affiliations
[1] Univ Svizzera Italiana USI, SEART Software Inst, Lugano, Switzerland
[2] Univ Molise, STAKE Lab, Campobasso, Italy
Source
2023 IEEE/ACM 45TH INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING, ICSE | 2023
Funding
European Research Council
Keywords
Empirical Study; Recommender Systems; Usage
DOI
10.1109/ICSE48619.2023.00181
Chinese Library Classification (CLC)
TP31 [Computer Software]
Subject classification codes
081202; 0835
Abstract
Software engineering research has always been concerned with improving code completion approaches, which suggest the next tokens a developer will likely type while coding. The release of GitHub Copilot constitutes a big step forward, also because of its unprecedented ability to automatically generate even entire functions from their natural language description. While the usefulness of Copilot is evident, it is still unclear to what extent it is robust. Specifically, we do not know the extent to which semantic-preserving changes in the natural language description provided to the model have an effect on the generated code function. In this paper we present an empirical study in which we aim at understanding whether different but semantically equivalent natural language descriptions result in the same recommended function. A negative answer would pose questions on the robustness of deep learning (DL)-based code generators, since it would imply that developers using different wordings to describe the same code would obtain different recommendations. We asked Copilot to automatically generate 892 Java methods starting from their original Javadoc description. Then, we generated different semantically equivalent descriptions for each method, both manually and automatically, and we analyzed the extent to which the predictions generated by Copilot changed. Our results show that modifying the description results in different code recommendations in ~46% of cases. Also, differences in the semantically equivalent descriptions might impact the correctness of the generated code (±28%).
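The automatic-paraphrase step of the study can be illustrated with a short sketch. The Python snippet below is a minimal, hedged example: it assumes a PEGASUS paraphrase checkpoint from the Hugging Face Hub (tuner007/pegasus_paraphrase is one such publicly available model; the exact fine-tuned checkpoint used by the authors is not reproduced here), and the generate_method helper standing in for a Copilot query is purely hypothetical, since Copilot is driven through the IDE rather than a public API.

# Sketch: produce semantically equivalent variants of a Javadoc-style
# description with a PEGASUS paraphrase model, then (conceptually) compare
# the code a generator recommends for the original vs. each variant.
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

MODEL = "tuner007/pegasus_paraphrase"  # assumed checkpoint, not the authors' exact model
tokenizer = PegasusTokenizer.from_pretrained(MODEL)
model = PegasusForConditionalGeneration.from_pretrained(MODEL)

def paraphrase(description: str, n: int = 3) -> list[str]:
    """Return n candidate rewordings of a method description."""
    batch = tokenizer([description], truncation=True, padding="longest",
                      return_tensors="pt")
    outputs = model.generate(**batch, max_length=60,
                             num_beams=10, num_return_sequences=n)
    return tokenizer.batch_decode(outputs, skip_special_tokens=True)

# def generate_method(description: str) -> str: ...
# (hypothetical stand-in for querying Copilot from the IDE)

original = "Returns the maximum element of the given list."
for variant in paraphrase(original):
    print(variant)
    # A robustness check would then compare generate_method(original)
    # with generate_method(variant), e.g., after normalizing whitespace.

A full replication would also need the manual-paraphrase track and a correctness check (e.g., running the paper's test suites) on each recommended function, which this sketch deliberately omits.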
Pages: 2149-2160
Number of pages: 12