Improving Bug Localization with an Enhanced Convolutional Neural Network

Cited by: 40
Authors
Xiao, Yan [1 ]
Keung, Jacky [1 ]
Mi, Qing [1 ]
Bennin, Kwabena E. [1 ]
Affiliations
[1] City Univ Hong Kong, Dept Comp Sci, Kowloon, Hong Kong, Peoples R China
Keywords
bug localization; convolutional neural network; word2vec; TF-IDF; deep learning; semantic information;
DOI
10.1109/APSEC.2017.40
CLC number
TP31 [Computer Software];
Subject classification code
081202 ; 0835 ;
Abstract
Background: Localizing buggy files automatically speeds up the process of bug fixing, improving the efficiency and productivity of software quality teams. Other useful semantic information is available in bug reports and source code, but it is mostly underutilized by existing bug localization approaches. Aims: We propose DeepLocator, a novel deep learning based model that improves the performance of bug localization by making full use of semantic information. Method: DeepLocator is composed of an enhanced CNN (Convolutional Neural Network) proposed in this study that considers bug-fixing experience, together with a new rTF-IDuF method and the pre-trained word2vec technique. DeepLocator is then evaluated on over 18,500 bug reports extracted from the AspectJ, Eclipse, JDT, SWT and Tomcat projects. Results: The experimental results show that DeepLocator achieves 9.77% to 26.65% higher F-measure than the conventional CNN and 3.8% higher MAP than the state-of-the-art method HyLoc, using less computation time. Conclusion: DeepLocator is capable of automatically connecting bug reports to the corresponding buggy files and achieves better performance based on a deep understanding of the semantics in bug reports and source code.
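The rTF-IDuF weighting described in the abstract builds on standard TF-IDF, which scores a term by how often it appears in a document relative to how rare it is across the corpus. The sketch below is a minimal, hypothetical illustration of plain TF-IDF over tokenized bug-report text; the paper's rTF-IDuF variant and its exact formulation are not reproduced here.

```python
import math
from collections import Counter

def tf_idf(corpus):
    """Compute standard TF-IDF weights for each document in a tokenized corpus.

    corpus: list of documents, each a list of tokens.
    Returns a list of {term: weight} dicts, one per document.
    """
    n = len(corpus)
    # Document frequency: number of documents containing each term.
    df = Counter()
    for doc in corpus:
        df.update(set(doc))
    weights = []
    for doc in corpus:
        tf = Counter(doc)
        total = len(doc)
        weights.append({
            # Term frequency scaled by inverse document frequency.
            term: (count / total) * math.log(n / df[term])
            for term, count in tf.items()
        })
    return weights

# Toy bug-report corpus (invented for illustration).
corpus = [
    "null pointer exception in widget renderer".split(),
    "widget layout crash on resize".split(),
    "memory leak in renderer cache".split(),
]
w = tf_idf(corpus)
# Terms unique to one report (e.g. "null") score higher than terms
# shared across reports (e.g. "widget"); a term in every report scores 0.
```

In DeepLocator such term weights are combined with word2vec embeddings before being fed to the CNN; this sketch only shows the weighting half of that pipeline.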
Pages: 338-347
Page count: 10
Related Papers
50 items total
  • [31] A Convolutional Gated Recurrent Neural Network for Seizure Onset Localization
    Daoud, Hisham
    Bayoumi, Magdy
    2020 IEEE INTERNATIONAL CONFERENCE ON BIOINFORMATICS AND BIOMEDICINE, 2020, : 2572 - 2576
  • [32] Statistical Localization Exploiting Convolutional Neural Network for an Autonomous Vehicle
    Ishibushi, Satoshi
    Taniguchi, Akira
    Takano, Toshiaki
    Hagiwara, Yoshinobu
    Taniguchi, Tadahiro
    IECON 2015 - 41ST ANNUAL CONFERENCE OF THE IEEE INDUSTRIAL ELECTRONICS SOCIETY, 2015, : 1369 - 1375
  • [33] Indoor Localization With an Autoencoder-Based Convolutional Neural Network
    Arslantas, Hatice
    Okdem, Selcuk
    IEEE ACCESS, 2024, 12 : 46059 - 46066
  • [34] Automatic localization of cephalometric landmarks based on convolutional neural network
    Yao, Jie
    Zeng, Wei
    He, Tao
    Zhou, Shanluo
    Zhang, Yi
    Guo, Jixiang
    Tang, Wei
    AMERICAN JOURNAL OF ORTHODONTICS AND DENTOFACIAL ORTHOPEDICS, 2022, 161 (03) : E250 - E259
  • [35] Binaural Sound Source Localization Based on Convolutional Neural Network
    Zhou, Lin
    Ma, Kangyu
    Wang, Lijie
    Chen, Ying
    Tang, Yibin
    CMC-COMPUTERS MATERIALS & CONTINUA, 2019, 60 (02): : 545 - 557
  • [36] Source localization in the deep ocean using a convolutional neural network
    Liu, Wenxu
    Yang, Yixin
    Xu, Mengqian
    Lu, Liangang
    Liu, Zongwei
    Shi, Yang
    JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA, 2020, 147 (04): : EL314 - EL319
  • [37] Improving generalization of convolutional neural network through contrastive augmentation
    Li, Xiaosong
    Wu, Yanxia
    Tang, Chuheng
    Fu, Yan
    Zhang, Lidan
    KNOWLEDGE-BASED SYSTEMS, 2023, 272
  • [38] Improving pedestrian detection using light convolutional neural network
    Errami, Mounir
    Rziza, Mohammed
    9TH INTERNATIONAL SYMPOSIUM ON SIGNAL, IMAGE, VIDEO AND COMMUNICATIONS (ISIVC 2018), 2018, : 71 - 75
  • [39] Improving Convolutional Neural Network Using Pseudo Derivative ReLU
    Hu, Zheng
    Li, Yongping
    Yang, Zhiyong
    2018 5TH INTERNATIONAL CONFERENCE ON SYSTEMS AND INFORMATICS (ICSAI), 2018, : 283 - 287
  • [40] A Deep Convolutional Neural Network Model for Improving WRF Simulations
    Sayeed, Alqamah
    Choi, Yunsoo
    Jung, Jia
    Lops, Yannic
    Eslami, Ebrahim
    Salman, Ahmed Khan
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (02) : 750 - 760