An empirical study on bug severity estimation using source code metrics and static analysis

Times Cited: 0
Authors
Mashhadi, Ehsan [1 ]
Chowdhury, Shaiful [2 ]
Modaberi, Somayeh [1 ]
Hemmati, Hadi [1 ,3 ]
Uddin, Gias [1 ]
Affiliations
[1] University of Calgary, Calgary, AB, Canada
[2] University of Manitoba, Winnipeg, MB, Canada
[3] York University, Toronto, ON, Canada
Funding
Natural Sciences and Engineering Research Council of Canada (NSERC);
Keywords
Bug severity; Defect prediction; Code complexity metrics; Static analysis tools; SOFTWARE; COMPLEXITY; MAINTENANCE; PREDICTION; SMELLS;
DOI
10.1016/j.jss.2024.112179
Chinese Library Classification
TP31 [Computer Software];
Discipline Code
081202; 0835;
Abstract
In the past couple of decades, significant research effort has been devoted to predicting software bugs (i.e., defects). In general, these works leverage a diverse set of metrics, tools, and techniques to predict which classes, methods, lines, or commits are buggy. However, most existing work in this domain treats all bugs the same, which is not the case in practice: the more severe a bug is, the greater its consequences. It is therefore important for a defect prediction method to estimate the severity of the identified bugs, so that the most severe ones receive immediate attention. In this paper, we present a quantitative and qualitative study on two popular datasets (Defects4J and Bugs.jar), using 10 common source code metrics and two popular static analysis tools (SpotBugs and Infer), to analyze their capability to predict defects and their severity. We studied 3,358 buggy methods with different severity labels from 19 open-source Java projects. Results show that although code metrics are useful in predicting buggy code (Lines of Code, Maintainability Index, FanOut, and Effort are the best), they cannot estimate the severity level of the bugs. In addition, we observed that static analysis tools perform weakly at both predicting bugs (F1 scores of 3.1%-7.1%) and predicting their severity labels (F1 scores under 2%). We also manually studied the characteristics of the severe bugs to identify possible reasons behind the weak performance of code metrics and static analysis tools in estimating their severity. Our categorization further shows that Security bugs are of high severity in most cases, while Edge/Boundary faults are mostly of low severity. Finally, we discuss the practical implications of these results and propose new directions for future research.
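The abstract names several classic method-level metrics (Lines of Code, Maintainability Index, FanOut, Halstead Effort). The Java program below is a minimal illustrative sketch of how two of them, Halstead Effort and the classic (unnormalized) Maintainability Index, are computed from raw counts; the MetricInputs record, its field names, and the sample counts are hypothetical, and the exact metric variants used in the paper may differ. In practice the counts would be extracted with a parser (e.g., JavaParser) rather than supplied by hand.

// Illustrative sketch only: standard Halstead and Maintainability Index
// formulas applied to hypothetical per-method counts.
public final class MethodMetrics {

    /** Raw counts assumed to be extracted from one method body. */
    public record MetricInputs(int distinctOperators,   // n1
                               int distinctOperands,    // n2
                               int totalOperators,      // N1
                               int totalOperands,       // N2
                               int cyclomaticComplexity,
                               int linesOfCode) {}

    /** Halstead Volume: V = N * log2(n), with N = N1 + N2 and n = n1 + n2. */
    public static double halsteadVolume(MetricInputs m) {
        int vocabulary = m.distinctOperators() + m.distinctOperands();
        int length = m.totalOperators() + m.totalOperands();
        return length * (Math.log(vocabulary) / Math.log(2));
    }

    /** Halstead Effort: E = D * V, with difficulty D = (n1 / 2) * (N2 / n2). */
    public static double halsteadEffort(MetricInputs m) {
        double difficulty = (m.distinctOperators() / 2.0)
                * ((double) m.totalOperands() / m.distinctOperands());
        return difficulty * halsteadVolume(m);
    }

    /** Classic (unnormalized) Maintainability Index:
     *  MI = 171 - 5.2 * ln(V) - 0.23 * CC - 16.2 * ln(LOC). */
    public static double maintainabilityIndex(MetricInputs m) {
        return 171
                - 5.2 * Math.log(halsteadVolume(m))
                - 0.23 * m.cyclomaticComplexity()
                - 16.2 * Math.log(m.linesOfCode());
    }

    public static void main(String[] args) {
        // Hypothetical counts for a small method, purely for demonstration.
        MetricInputs m = new MetricInputs(10, 14, 40, 55, 4, 25);
        System.out.printf("Volume = %.1f%n", halsteadVolume(m));
        System.out.printf("Effort = %.1f%n", halsteadEffort(m));
        System.out.printf("MI     = %.1f%n", maintainabilityIndex(m));
    }
}

For the sample counts above, the Halstead Volume is roughly 436, giving an Effort of about 8,600 and a Maintainability Index of about 86; in the classic formulation, higher MI values indicate more maintainable code.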
Pages: 23