How Important Are Good Method Names in Neural Code Generation? A Model Robustness Perspective

Cited by: 5
Authors
Yang, Guang [1 ]
Zhou, Yu [1 ]
Yang, Wenhua [1 ]
Yue, Tao [2 ]
Chen, Xiang [3 ]
Chen, Taolue [4 ]
Affiliations
[1] Nanjing Univ Aeronaut & Astronaut, Nanjing, Peoples R China
[2] Beihang Univ, Beijing, Peoples R China
[3] Nantong Univ, Nantong, Peoples R China
[4] Birkbeck Univ London, London, England
Funding
National Natural Science Foundation of China;
Keywords
Code generation; adversarial examples; robustness; passive defense; pre-trained model;
DOI
10.1145/3630010
Chinese Library Classification Number
TP31 [Computer Software];
Subject Classification Code
081202; 0835;
Abstract
Pre-trained code generation models (PCGMs) have been widely applied in neural code generation; they generate executable code from functional descriptions in natural language, possibly together with method signatures. Despite the substantial performance improvements of PCGMs, the role of method names in neural code generation has not been thoroughly investigated. In this article, we study and demonstrate the potential of leveraging method names to enhance the performance of PCGMs from a model robustness perspective. Specifically, we propose a novel approach, named neuRAl coDe generAtor Robustifier (RADAR). RADAR consists of two components: RADAR-Attack and RADAR-Defense. The former attacks a PCGM by generating adversarial method names as part of the input, which are semantically and visually similar to the original input but may trick the PCGM into generating completely unrelated code snippets. As a countermeasure to such attacks, RADAR-Defense synthesizes a new method name from the functional description and supplies it to the PCGM. Evaluation results show that RADAR-Attack can reduce the CodeBLEU of generated code by 19.72% to 38.74% on three state-of-the-art PCGMs (i.e., CodeGPT, PLBART, and CodeT5) in the fine-tuning code generation task, and can reduce the Pass@1 of generated code by 32.28% to 44.42% on three state-of-the-art PCGMs (i.e., Replit, CodeGen, and CodeT5+) in the zero-shot code generation task. Moreover, RADAR-Defense is able to reinstate the performance of PCGMs with synthesized method names. These results highlight the importance of good method names in neural code generation and point to the benefits of studying model robustness in software engineering.
Pages: 35