TransformCode: A Contrastive Learning Framework for Code Embedding via Subtree Transformation

Cited by: 2
Authors
Xian, Zixiang [1 ]
Huang, Rubing [1 ]
Towey, Dave [2 ]
Fang, Chunrong [3 ]
Chen, Zhenyu [3 ]
Affiliations
[1] Macau Univ Sci & Technol, Sch Comp Sci & Engn, Macau 999078, Peoples R China
[2] Univ Nottingham Ningbo China, Sch Comp Sci, Ningbo 315100, Zhejiang, Peoples R China
[3] Nanjing Univ, State Key Lab Novel Software Technol, Nanjing 210093, Peoples R China
Keywords
Codes; Task analysis; Self-supervised learning; Syntactics; Semantics; Vectors; Training; Code embedding; Transformer; Abstract syntax tree; Contrastive learning; Networks
DOI
10.1109/TSE.2024.3393419
CLC Number
TP31 [Computer Software]
Discipline Codes
081202; 0835
Abstract
Artificial intelligence (AI) has revolutionized software engineering (SE) by enhancing software development efficiency. The advent of pre-trained models (PTMs) leveraging transfer learning has significantly advanced AI for SE. However, existing PTMs that operate on individual code tokens suffer from several limitations: they are costly to train and fine-tune, and they rely heavily on labeled data for fine-tuning on task-specific datasets. In this paper, we present TransformCode, a novel framework that learns code embeddings through contrastive learning. Our framework is encoder-agnostic and language-agnostic: it can leverage any encoder model and handle any programming language. We also propose a novel data-augmentation technique, abstract syntax tree (AST) transformation, which applies syntactic and semantic transformations to the original code snippets to generate more diverse and robust samples for contrastive learning. Our framework has several advantages over existing methods: (1) it is flexible and adaptable, because it can easily be extended to other downstream tasks that require code representation (such as code-clone detection and classification); (2) it is efficient and scalable, because it does not require a large model or a large amount of training data, and it can support any programming language; (3) it is not limited to unsupervised learning, and can also be applied to some supervised tasks by incorporating task-specific labels or objectives; and (4) it can adjust the number of encoder parameters according to the available computing resources. We evaluate our framework on several code-related tasks, and demonstrate its effectiveness and superiority over state-of-the-art methods such as SourcererCC, Code2vec, and InferCode.
Pages: 1600-1619
Page count: 20
References
73 in total
[1] Ahmad W. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020: 4998. DOI: 10.18653/v1/2020.acl-main.449
[2] Ahmad W.U. Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2021), 2021: 2655
[3] Allamanis M. arXiv preprint, 2018. arXiv:1711.00740
[4] Alon U. International Conference on Learning Representations, 2019
[5] Alon U., Zilberstein M., Levy O., Yahav E. code2vec: Learning Distributed Representations of Code. Proceedings of the ACM on Programming Languages (PACMPL), 2019, 3(POPL)
[6] Batra H., Punn N.S., Sonbhadra S.K., Agarwal S. BERT-Based Sentiment Analysis: A Software Engineering Perspective. Database and Expert Systems Applications (DEXA 2021), Part I, 2021, 12923: 138-148
[7] Bui N.D.Q. arXiv preprint, 2023. arXiv:2306.00029
[8] Bui N.D.Q., Yu Y., Jiang L. InferCode: Self-Supervised Learning of Code Representations by Predicting Subtrees. 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE 2021), 2021: 1186-1197
[9] Bui N.D.Q. AAAI Conference on Artificial Intelligence, 2021, 35: 30
[10] Chakraborty S., Krishna R., Ding Y., Ray B. Deep Learning Based Vulnerability Detection: Are We There Yet? IEEE Transactions on Software Engineering, 2022, 48(9): 3280-3296