Embedding Java Classes with code2vec: Improvements from Variable Obfuscation

Cited by: 35
Authors
Compton, Rhys [1 ]
Frank, Eibe [1 ]
Patros, Panos [1 ]
Koay, Abigail [1 ]
Affiliations
[1] Univ Waikato, Hamilton, New Zealand
Source
2020 IEEE/ACM 17TH INTERNATIONAL CONFERENCE ON MINING SOFTWARE REPOSITORIES, MSR | 2020
Keywords
machine learning; neural networks; code2vec; source code; code obfuscation;
DOI
10.1145/3379597.3387445
CLC Classification Number
TP31 [Computer Software];
Subject Classification Code
081202 ; 0835 ;
Abstract
Automatic source code analysis in key areas of software engineering, such as code security, can benefit from machine learning (ML). However, many standard ML approaches require a numeric representation of data and cannot be applied directly to source code. Thus, to enable ML, we need to embed source code into numeric feature vectors while maintaining the semantics of the code as much as possible. code2vec is a recently released embedding approach that uses the proxy task of method name prediction to map Java methods to feature vectors. However, experimentation with code2vec shows that it learns to rely on variable names for prediction, causing it to be easily fooled by typos or adversarial attacks. Moreover, it can only embed individual Java methods and cannot embed an entire collection of methods such as those present in a typical Java class, making it difficult to perform predictions at the class level (e.g., for the identification of malicious Java classes). Both shortcomings are addressed in the research presented in this paper. We investigate the effect of obfuscating variable names during training of a code2vec model, forcing it to rely on the structure of the code rather than specific names, and consider a simple approach to creating class-level embeddings by aggregating sets of method embeddings. Our results, obtained on a challenging new collection of source-code classification problems, indicate that obfuscating variable names produces an embedding model that is impervious to variable naming and that more accurately reflects code semantics. The datasets, models, and code are shared(1) for further ML research on source code.
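To make the class-level aggregation concrete, below is a minimal Python sketch of one simple aggregation: element-wise averaging of per-method code2vec vectors. The 384-dimensional width matches code2vec's default code-vector size; the random placeholder vectors, the function name class_embedding, and the choice of mean pooling are illustrative assumptions, not necessarily the paper's exact procedure.

    # Sketch: build a class-level embedding by averaging per-method vectors.
    # Placeholder vectors are used here; in practice each entry would be the
    # 384-dimensional code vector produced by a trained code2vec model for
    # one Java method of the class.
    import numpy as np

    def class_embedding(method_vectors):
        """Mean-pool a list of method embeddings into one class vector."""
        return np.mean(np.stack(method_vectors), axis=0)

    rng = np.random.default_rng(0)
    methods = [rng.standard_normal(384) for _ in range(3)]  # class with 3 methods
    print(class_embedding(methods).shape)  # -> (384,)

Mean pooling keeps the class vector in the same space as the method vectors, so a classifier trained on that space can score whole classes (e.g., for malicious-class identification) without modification.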
Pages: 243-253
Page count: 11