Project-Level Encoding for Neural Source Code Summarization of Subroutines

Cited: 26
Authors
Bansal, Aakash [1]
Haque, Sakib [1]
McMillan, Collin [1]
Affiliations
[1] Univ Notre Dame, Dept Comp Sci & Engn, Notre Dame, IN 46556 USA
Source
2021 IEEE/ACM 29TH INTERNATIONAL CONFERENCE ON PROGRAM COMPREHENSION (ICPC 2021) | 2021
Keywords
source code summarization; automatic documentation generation; neural networks; program comprehension
DOI
10.1109/ICPC52881.2021.00032
CLC Classification
TP31 [Computer Software]
Discipline Codes
081202; 0835
Abstract
Source code summarization of a subroutine is the task of writing a short, natural language description of that subroutine. The description usually serves in documentation aimed at programmers, where even a brief phrase (e.g. "compresses data to a zip file") can help readers rapidly comprehend what a subroutine does without resorting to reading the code itself. Techniques based on neural networks (and encoder-decoder model designs in particular) have established themselves as the state-of-the-art. Yet a widely recognized problem with these models is that they assume the information needed to create a summary is present within the code being summarized - an assumption at odds with the program comprehension literature. Thus a current research frontier lies in the question of encoding source code context into neural models of summarization. In this paper, we present a project-level encoder to improve models of code summarization. By project-level, we mean that we create a vectorized representation of selected code files in a software project, and use that representation to augment the encoder of state-of-the-art neural code summarization techniques. We demonstrate how our encoder improves several existing models, and provide guidelines for maximizing improvement while controlling time and resource costs in model size.
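As a rough illustration of the idea the abstract describes (combining a subroutine's own encoding with a vector summarizing other files in the project), here is a minimal sketch. It substitutes bag-of-words counts for the paper's learned neural encoders, and all function names are hypothetical, so this is an assumption-laden toy, not the authors' implementation:

```python
import re

import numpy as np


def tokenize(code):
    # crude tokenizer: split source text on non-alphanumeric characters
    return [t for t in re.split(r"\W+", code) if t]


def bow_vector(tokens, vocab):
    # bag-of-words count vector over a fixed vocabulary
    v = np.zeros(len(vocab))
    for t in tokens:
        if t in vocab:
            v[vocab[t]] += 1
    return v


def project_context_vector(files, vocab):
    # average the per-file vectors into one project-level representation,
    # standing in for the paper's project-level encoder
    vecs = [bow_vector(tokenize(f), vocab) for f in files]
    return np.mean(vecs, axis=0)


def augmented_encoding(subroutine, files, vocab):
    # concatenate the subroutine's encoding with the project-level vector,
    # mimicking how project context augments a code encoder's output
    sub_vec = bow_vector(tokenize(subroutine), vocab)
    proj_vec = project_context_vector(files, vocab)
    return np.concatenate([sub_vec, proj_vec])
```

In a real model the two parts would be learned embeddings fed to a decoder; the point here is only the shape of the augmentation, a subroutine vector extended with a vector derived from other files in the same project.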
Pages: 253-264 (12 pages)