An empirical evaluation of NASA-MDP data sets using a genetic defect-proneness prediction framework

Authors

Murillo Morera, Juan
Quesada López, Christian Ulises
Castro Herrera, Carlos
Jenkins Coronas, Marcelo

Abstract

Software quality is an important research area in software engineering. The automated generation of learning schemes plays an important role here and represents an efficient way to detect defects in software projects, avoiding high costs and long delivery times. This study carries out an empirical evaluation of two versions, with different levels of noise, of the NASA-MDP data sets. The main objective of this paper is to determine the stability of our framework. In all, 864 learning schemes were studied (8 data preprocessors × 6 attribute selectors × 18 learning algorithms). According to the statistical tests, our framework reported stable results between the analyzed versions: the performance of the evaluation and prediction phases was similar and remained stable across the data set versions. This means that the differences between versions did not affect the performance of our framework.
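The 864 learning schemes arise as the Cartesian product of the three component families. A minimal Python sketch of this enumeration follows; the component names are placeholders for illustration only, not the actual preprocessors, selectors, or algorithms used in the study:

```python
from itertools import product

# Placeholder component lists; the paper's concrete techniques are not named here.
preprocessors = [f"preprocessor_{i}" for i in range(8)]   # 8 data preprocessors
selectors = [f"selector_{i}" for i in range(6)]           # 6 attribute selectors
learners = [f"learner_{i}" for i in range(18)]            # 18 learning algorithms

# A learning scheme is one (preprocessor, selector, learner) combination.
schemes = list(product(preprocessors, selectors, learners))
assert len(schemes) == 864  # 8 × 6 × 18
```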

Keywords

Prediction models, Learning schemes, Software metrics, Statistical analysis, Empirical procedure

Citation

http://ieeexplore.ieee.org/document/7942359/
