An empirical study on bug severity estimation using source code metrics and static analysis

Citations: 0
Authors
Mashhadi, Ehsan [1 ]
Chowdhury, Shaiful [2 ]
Modaberi, Somayeh [1 ]
Hemmati, Hadi [1 ,3 ]
Uddin, Gias [1 ]
Affiliations
[1] Univ Calgary, Calgary, AB, Canada
[2] Univ Manitoba, Winnipeg, MB, Canada
[3] York Univ, Toronto, ON, Canada
Funding
Natural Sciences and Engineering Research Council of Canada (NSERC);
Keywords
Bug severity; Defect prediction; Code complexity metrics; Static analysis tools; Software; Complexity; Maintenance; Prediction; Smells;
DOI
10.1016/j.jss.2024.112179
Chinese Library Classification (CLC)
TP31 [Computer software];
Discipline Codes
081202; 0835;
Abstract
In the past couple of decades, significant research effort has been devoted to the prediction of software bugs (i.e., defects). In general, these works leverage a diverse set of metrics, tools, and techniques to predict which classes, methods, lines, or commits are buggy. However, most existing work in this domain treats all bugs the same, which is not the case in practice: the more severe a bug, the greater its consequences. It is therefore important for a defect prediction method to estimate the severity of the identified bugs, so that the most severe ones receive immediate attention. In this paper, we provide a quantitative and qualitative study on two popular datasets (Defects4J and Bugs.jar), using 10 common source code metrics and two popular static analysis tools (SpotBugs and Infer), to analyze their capability to predict defects and their severity. We studied 3,358 buggy methods with different severity labels from 19 open-source Java projects. Results show that although code metrics are useful in predicting buggy code (Lines of Code, Maintainability Index, FanOut, and Effort are the best metrics), they cannot estimate the severity level of the bugs. In addition, we observed that static analysis tools perform poorly at both predicting bugs (F1 scores of 3.1%-7.1%) and predicting their severity labels (F1 scores under 2%). We also manually studied the characteristics of the severe bugs to identify possible reasons behind the weak performance of code metrics and static analysis tools in estimating their severity. Our categorization shows that Security bugs have high severity in most cases, while Edge/Boundary faults have low severity. Finally, we discuss the practical implications of the results and propose new directions for future research.
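To make the evaluation design described above concrete, here is a minimal sketch of how per-method code metrics can be fed to a classifier and scored with F1, once for a binary buggy/non-buggy target and once for multi-class severity labels. This is illustrative only, not the authors' pipeline: the file name method_metrics.csv, the column names, and the choice of a random forest are assumptions.

    # Minimal sketch (not the authors' pipeline): train a classifier on
    # per-method source code metrics to predict (a) bugginess and
    # (b) severity, and report F1, mirroring the kind of evaluation the
    # abstract describes. The CSV path and column names are hypothetical.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import f1_score

    df = pd.read_csv("method_metrics.csv")  # hypothetical: one row per method

    # Hypothetical metric columns; the paper uses 10 metrics, including
    # Lines of Code, Maintainability Index, FanOut, and Halstead Effort.
    features = ["loc", "maintainability_index", "fan_out", "halstead_effort"]

    for target in ["is_buggy", "severity"]:  # binary, then multi-class
        X_train, X_test, y_train, y_test = train_test_split(
            df[features], df[target], test_size=0.2, random_state=42,
            stratify=df[target])
        clf = RandomForestClassifier(random_state=42).fit(X_train, y_train)
        pred = clf.predict(X_test)
        # Macro-F1 covers the multi-class severity labels as well as the
        # binary buggy/non-buggy case.
        print(target, f1_score(y_test, pred, average="macro"))

Under this setup, the paper's headline result corresponds to the first target scoring well while the second scores poorly: the same metrics that separate buggy from non-buggy methods carry little signal about severity.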
Pages: 23