An empirical study on bug severity estimation using source code metrics and static analysis

Times Cited: 0
Authors
Mashhadi, Ehsan [1 ]
Chowdhury, Shaiful [2 ]
Modaberi, Somayeh [1 ]
Hemmati, Hadi [1 ,3 ]
Uddin, Gias [1 ]
Affiliations
[1] Univ Calgary, Calgary, AB, Canada
[2] Univ Manitoba, Winnipeg, MB, Canada
[3] York Univ, Toronto, ON, Canada
Funding
Natural Sciences and Engineering Research Council of Canada (NSERC);
Keywords
Bug severity; Defect prediction; Code complexity metrics; Static analysis tools;
Keywords Plus
SOFTWARE; COMPLEXITY; MAINTENANCE; PREDICTION; SMELLS;
DOI
10.1016/j.jss.2024.112179
CLC Number
TP31 [Computer Software];
Subject Classification Codes
081202; 0835;
Abstract
Over the past couple of decades, significant research effort has been devoted to the prediction of software bugs (i.e., defects). In general, these works leverage a diverse set of metrics, tools, and techniques to predict which classes, methods, lines, or commits are buggy. However, most existing work in this domain treats all bugs equally, which is not the case in practice: the more severe a bug, the higher its consequences. It is therefore important for a defect prediction method to estimate the severity of the identified bugs, so that the most severe ones receive immediate attention. In this paper, we present a quantitative and qualitative study on two popular datasets (Defects4J and Bugs.jar), using 10 common source code metrics and two popular static analysis tools (SpotBugs and Infer), to analyze their capability to predict defects and their severity. We studied 3,358 buggy methods with different severity labels from 19 open-source Java projects. Results show that although code metrics are useful in predicting buggy code (Lines of Code, Maintainability Index, FanOut, and Effort are the best predictors), they cannot estimate the severity level of the bugs. In addition, we observed that the static analysis tools perform weakly both at predicting bugs (F1 scores of 3.1%-7.1%) and at predicting their severity labels (F1 scores under 2%). We also manually studied the characteristics of the severe bugs to identify possible reasons behind the weak performance of code metrics and static analysis tools in estimating their severity. Our categorization shows that Security bugs have high severity in most cases, while Edge/Boundary faults have low severity. Finally, we discuss the practical implications of the results and propose new directions for future research.
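To make the setup described in the abstract concrete, the following is a minimal, illustrative sketch (not the authors' pipeline; the metric values, severity labels, helper function, and model choice are all hypothetical) of feeding method-level code metrics such as Lines of Code, Maintainability Index, FanOut, and Halstead Effort into a severity classifier. The Maintainability Index is computed with one common variant of its classic formula; this is exactly the kind of metric-based severity estimation that the study finds to perform poorly.

    # Minimal sketch, assuming scikit-learn is available; all values are hypothetical.
    import math
    from sklearn.ensemble import RandomForestClassifier

    def maintainability_index(halstead_volume, cyclomatic_complexity, loc):
        # One common variant of the classic formula:
        # MI = 171 - 5.2*ln(HV) - 0.23*CC - 16.2*ln(LOC)
        return (171 - 5.2 * math.log(halstead_volume)
                - 0.23 * cyclomatic_complexity
                - 16.2 * math.log(loc))

    # Hypothetical per-method feature rows: [LOC, MI, FanOut, Halstead Effort]
    X = [
        [12, maintainability_index(80.0, 3, 12), 4, 250.0],
        [95, maintainability_index(900.0, 14, 95), 11, 4200.0],
        [40, maintainability_index(300.0, 6, 40), 7, 1100.0],
    ]
    y = ["Low", "Critical", "Medium"]  # hypothetical severity labels

    clf = RandomForestClassifier(random_state=0).fit(X, y)
    print(clf.predict([[60, maintainability_index(500.0, 9, 60), 8, 2000.0]]))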
Pages: 23