Evaluating noise elimination techniques for software quality estimation

Cited by: 2
Authors:
Khoshgoftaar, Taghi M. [1]
Rebours, Pierre [1]
Affiliation:
[1] Florida Atlantic Univ, Dept Comp Sci & Engn, Empirical Software Engn Lab, Boca Raton, FL 33431 USA
DOI:
10.3233/IDA-2005-9506
CLC Number:
TP18 [Artificial Intelligence Theory]
Subject Classification Codes:
081104; 0812; 0835; 1405
Abstract:
The poor quality of a training dataset can have untoward consequences in software quality estimation problems. The presence of noise in software measurement data may hinder the prediction accuracy of a given learner. A filter improves the quality of a training dataset by removing data that is likely noise. We evaluate the Ensemble Filter against the Partitioning Filter and the Classification Filter. These filtering techniques combine the predictions of base classifiers in such a way that an instance is identified as noisy if it is misclassified by a given number of these learners. The Partitioning Filter first splits the training dataset into subsets, and different base learners are induced on each subset. Two implementations of the Partitioning Filter are presented: the Multiple-Partitioning Filter and the Iterative-Partitioning Filter. In contrast, the Ensemble Filter uses base classifiers induced on the entire training dataset. The filtering level and/or the number of iterations determine how conservative the filtering is: a conservative filter is less likely to remove good data, at the expense of retaining noisy instances. A unique measure for comparing the relative efficiencies of two filters is also presented. Empirical studies on a high-assurance software project evaluate the relative performance of the Ensemble Filter, the Multiple-Partitioning Filter, the Iterative-Partitioning Filter, and the Classification Filter. Our study demonstrates that, with a conservative filtering approach, using several different base learners can improve the efficiency of the filtering schemes.
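For readers unfamiliar with majority-vote noise filtering, the sketch below illustrates the core idea behind the Ensemble Filter as the abstract describes it: several base learners are induced on the entire training dataset (via cross-validation, so each prediction comes from a model that did not train on that instance), and an instance is flagged as likely noise when at least a given number of learners misclassify it. This is a minimal illustration, not the authors' implementation; scikit-learn, the particular base learners, and the filtering_level and cv defaults are all illustrative assumptions.

# Minimal sketch (an assumption-laden illustration, not the authors'
# code) of majority-vote noise filtering: an instance is flagged as
# likely noise when at least `filtering_level` base learners
# misclassify it. scikit-learn and these base learners are
# illustrative choices.
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

def ensemble_filter(X, y, filtering_level=2, cv=5):
    """Return a boolean mask marking instances to keep after filtering."""
    y = np.asarray(y)
    base_learners = [
        DecisionTreeClassifier(random_state=0),
        GaussianNB(),
        KNeighborsClassifier(n_neighbors=5),
    ]
    # Count, per instance, how many base learners misclassify it; each
    # prediction comes from a cross-validation fold the instance was not
    # trained on, while the learners are still induced on the training
    # data as a whole (Ensemble Filter, not Partitioning Filter).
    misclassified = np.zeros(len(y), dtype=int)
    for learner in base_learners:
        predictions = cross_val_predict(learner, X, y, cv=cv)
        misclassified += (predictions != y).astype(int)
    # An instance is discarded only when `filtering_level` or more
    # learners agree it is misclassified; raising this threshold makes
    # the filter more conservative (less good data removed, at the
    # expense of retaining more noisy instances).
    return misclassified < filtering_level

Applying the mask as X[keep], y[keep] yields the filtered training set. By analogy, the Partitioning Filter described above would run the same vote with base learners induced on disjoint subsets of the training data rather than on the full dataset.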
Pages: 487-508
Page count: 22