Software defect prediction: do different classifiers find the same defects?

Keywords: Cascading classifiers · Confusion matrix · Predictive modelling · Majority rule · Software bug
DOI: 10.1007/s11219-016-9353-3 Publication Date: 2017-02-07T01:33:05Z
ABSTRACT
During the last 10 years, hundreds of different defect prediction models have been published. The performance of the classifiers used in these models is reported to be similar, with models rarely performing above the predictive performance ceiling of about 80% recall. We investigate the individual defects that four classifiers predict and analyse the level of prediction uncertainty produced by these classifiers. We perform a sensitivity analysis to compare the performance of Random Forest, Naïve Bayes, RPart and SVM classifiers when predicting defects in NASA, open source and commercial datasets. The defect predictions that each classifier makes are captured in a confusion matrix and the prediction uncertainty of each classifier is compared. Despite similar predictive performance values for these four classifiers, each detects different sets of defects. Some classifiers are more consistent in predicting defects than others. Our results confirm that a unique subset of defects can be detected by specific classifiers. However, while some classifiers are consistent in the predictions they make, other classifiers vary in their predictions. Given our results, we conclude that classifier ensembles with decision-making strategies not based on majority voting are likely to perform best in defect prediction.
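The per-defect comparison the abstract describes can be sketched in a few lines. The following is a hypothetical illustration, not the paper's code: it trains four classifiers on a synthetic stand-in for a defect dataset and compares which truly defective instances each one flags, including the defects only a single classifier finds. RPart is an R decision-tree package, so scikit-learn's `DecisionTreeClassifier` is used here as a rough analogue.

```python
# Hedged sketch of the paper's per-defect comparison idea (not the authors' code).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier  # stands in for R's RPart
from sklearn.model_selection import train_test_split

# Synthetic, imbalanced stand-in for a defect dataset (label 1 = defective).
X, y = make_classification(n_samples=400, n_features=10,
                           weights=[0.7, 0.3], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

classifiers = {
    "RandomForest": RandomForestClassifier(random_state=0),
    "NaiveBayes": GaussianNB(),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(),
}

# For each classifier, collect the set of truly defective test instances
# it correctly flags (its true positives from the confusion matrix).
detected = {}
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    pred = clf.predict(X_test)
    detected[name] = {i for i, (p, t) in enumerate(zip(pred, y_test))
                      if t == 1 and p == 1}

# Defects found by all four classifiers vs. defects unique to one classifier.
common = set.intersection(*detected.values())
for name, found in detected.items():
    others = set.union(*(s for n, s in detected.items() if n != name))
    print(f"{name}: {len(found)} defects found, {len(found - others)} unique")
print(f"Found by all four: {len(common)}")
```

If each classifier's unique set is non-empty, a union-style ensemble (rather than majority voting, which discards defects found by only one or two members) would recover more defects — the design point the abstract concludes with.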