Multilingual training for Software Engineering

Cited by: 31
Authors
Ahmed, Toufique [1 ]
Devanbu, Premkumar [1 ]
Affiliations
[1] Univ Calif Davis, Davis, CA 95616 USA
Source
2022 ACM/IEEE 44TH INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING (ICSE 2022) | 2022
Funding
U.S. National Science Foundation
Keywords
code summarization; code search; method name prediction; deep learning;
DOI
10.1145/3510003.3510049
Chinese Library Classification
TP31 [Computer Software]
Discipline classification codes
081202 ; 0835
Abstract
Well-trained machine-learning models, which leverage large amounts of open-source software data, have now become an interesting approach to automating many software engineering tasks. Several SE tasks have been subject to this approach, with performance gradually improving over the past several years thanks to better models and training methods. More abundant and more diverse clean, labeled data is better for training; but constructing good-quality datasets is time-consuming and challenging. Ways of augmenting the volume and diversity of clean, labeled data generally have wide applicability. For some languages (e.g., Ruby) labeled data is less abundant; for others (e.g., JavaScript) the available data may be more focused on some application domains, and thus less diverse. As a way around such data bottlenecks, we present evidence suggesting that human-written code in different languages (which performs the same function) is rather similar, and in particular preserves identifier naming patterns; we further present evidence suggesting that identifiers are a very important element of training data for software engineering tasks. We leverage this rather fortuitous phenomenon to find evidence that available multilingual training data (across different languages) can be used to amplify performance. We study this for 3 different tasks: code summarization, code retrieval, and function naming. We note that this data-augmenting approach is broadly compatible with different tasks, languages, and machine-learning models.
Pages: 1443-1455
Page count: 13
References
76 in total
[1]  
Ahmad Wasi, 2020, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
[2]  
Ahmad Wasi Uddin, 2021, Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics, p. 2655
[3]  
Ahmed T, 2022, arXiv preprint arXiv:2104.14671
[4]   Ahmed, Toufique; Devanbu, Premkumar; Sawant, Anand Ashok. Learning to Find Usages of Library Functions in Optimized Binaries [J]. IEEE TRANSACTIONS ON SOFTWARE ENGINEERING, 2022, 48 (10): 3862-3876
[5]   Allamanis, Miltiadis. The Adverse Effects of Code Duplication in Machine Learning Models of Code [J]. PROCEEDINGS OF THE 2019 ACM SIGPLAN INTERNATIONAL SYMPOSIUM ON NEW IDEAS, NEW PARADIGMS, AND REFLECTIONS ON PROGRAMMING AND SOFTWARE (ONWARD! '19), 2019: 143-153
[6]  
Allamanis M, 2016, Proceedings of Machine Learning Research, Vol. 48
[7]  
Alon U., 2019, ICLR
[8]   Alon, Uri; Zilberstein, Meital; Levy, Omer; Yahav, Eran. code2vec: Learning Distributed Representations of Code [J]. PROCEEDINGS OF THE ACM ON PROGRAMMING LANGUAGES (PACMPL), 2019, 3 (POPL)
[9]  
[Anonymous], 2009, Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing
[10]  
[Anonymous], 2016, Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics