CodeBERT for Code Clone Detection: A Replication Study

Cited by: 12
Authors
Arshad, Saad [1 ]
Abid, Shamsa [2 ]
Shamail, Shafay [1 ]
Affiliations
[1] LUMS, Dept Comp Sci, SBASSE, Lahore, Pakistan
[2] Singapore Management Univ, Sch Comp & Informat Syst, Singapore, Singapore
Source
2022 IEEE 16TH INTERNATIONAL WORKSHOP ON SOFTWARE CLONES (IWSC 2022) | 2022
Keywords
Code Clone Detection; Semantic Code Clones; Deep-learning; CodeBERT; BigCloneBench; SemanticCloneBench; Android
DOI
10.1109/IWSC55060.2022.00015
CLC Number
TP31 [Computer Software]
Subject Classification Numbers
081202; 0835
Abstract
Large pre-trained models have dramatically improved the state-of-the-art on a variety of natural language processing (NLP) tasks. CodeBERT is one such pre-trained model for natural language (NL) and programming language (PL) which captures the semantics of natural language and programming language and produces general-purpose representations. While it has been shown to support natural language code search and code documentation generation tasks, its effectiveness for code clone detection has not been explored in depth. In this paper, we aim to replicate and evaluate the performance of CodeBERT for code clone detection on multiple datasets with varying functionalities to understand (1) whether CodeBERT can generalize to unseen code, (2) how fine-tuning affects CodeBERT's performance on unseen code, and (3) how CodeBERT performs at detecting the various code clone types. To this end, we consider three different datasets of Java methods. We derive the first dataset from BigCloneBench. We use Java clone pairs from SemanticCloneBench to derive our second dataset, and our third dataset consists of Java methods from Android applications. Our experiments indicate that CodeBERT performs best at detecting Type-1 and Type-4 clones, with 100% and 96% average recall, respectively. We also find that there is limited generalizability to unseen functionalities, where recall drops by 15% and 40% on the SemanticCloneBench and Android datasets, respectively. Furthermore, we observe that fine-tuning can improve recall by 22% and 30% on the SemanticCloneBench and Android datasets, respectively.
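The abstract describes using CodeBERT's general-purpose representations to decide whether two Java methods are clones. As an illustrative sketch only (not the authors' replication pipeline), one common way to use the pre-trained encoder is to mean-pool its token embeddings for each method and threshold the cosine similarity between the two vectors; the Hugging Face model name `microsoft/codebert-base` is the real checkpoint, while the similarity threshold below is an assumed placeholder.

```python
# Illustrative CodeBERT clone-scoring sketch (NOT the paper's exact method):
# embed each method, mean-pool the token representations, compare by cosine.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base")
model.eval()

def embed(code: str) -> torch.Tensor:
    """Mean-pool the last hidden states into a single vector per method."""
    inputs = tokenizer(code, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # shape: (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)           # shape: (768,)

def clone_score(a: str, b: str) -> float:
    """Cosine similarity between the embeddings of two methods."""
    return torch.nn.functional.cosine_similarity(embed(a), embed(b), dim=0).item()

# A Type-2-style pair: identical structure, renamed identifiers.
m1 = "public int add(int a, int b) { return a + b; }"
m2 = "public int sum(int x, int y) { return x + y; }"

score = clone_score(m1, m2)
print(f"similarity = {score:.3f}")
is_clone = score >= 0.95  # assumed threshold, for illustration only
```

In practice, as the abstract's fine-tuning results suggest, the raw pre-trained similarities are usually fed into (or replaced by) a fine-tuned classification head trained on labeled clone pairs rather than a fixed threshold.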
Pages: 39-45 (7 pages)