Show simple item record

dc.creator: Murillo Morera, Juan
dc.creator: Quesada López, Christian Ulises
dc.creator: Castro Herrera, Carlos
dc.creator: Jenkins Coronas, Marcelo
dc.date.accessioned: 2018-01-22T19:09:17Z
dc.date.available: 2018-01-22T19:09:17Z
dc.date.issued: 2016-04
dc.identifier.uri: http://hdl.handle.net/10669/73965
dc.description.abstract: The complexity of providing accurate software effort prediction models is well known in the software industry. Several prediction models have been proposed in the literature using different techniques, with different results, in different contexts. Objectives: This paper reports a benchmarking study using a genetic approach that automatically generates and compares different learning schemes (preprocessing + attribute selection + learning algorithm). The effectiveness of the software development effort prediction models (using function points) was validated using the ISBSG R12 dataset. Methods: Eight subsets of projects were analyzed running an M×N-fold cross-validation. We used a genetic approach to automatically select the components of the learning schemes, to evaluate them, and to report the learning scheme with the best performance. Results: In total, 150 learning schemes were studied (2 data preprocessors, 5 attribute selectors, and 15 modeling techniques). The most common learning schemes were: Log+ForwardSelection+M5-Rules, Log+BestFirst+M5-Rules, Log+LinearForwardSelection+SMOreg, ForwardSelection+SMOreg, BackwardElimination+SMOreg, LinearForwardSelection+SMOreg, and Log+BestFirst+SMOreg. Conclusions: The results show that a different learning scheme should be selected for each dataset. Our results support previous findings that the setup applied in evaluations can completely reverse findings. A genetic approach that automatically selects the best combination for a specific dataset could improve the performance of software effort prediction models.
dc.description.sponsorship: The International Software Benchmarking Standards Group/[]/ISBSG/Australia
dc.description.sponsorship: Universidad de Costa Rica/[834-B5-A18]/UCR/Costa Rica
dc.description.sponsorship: Universidad Nacional de Costa Rica/[]/UNA/Costa Rica
dc.description.sponsorship: Ministerio de Ciencia, Tecnología y Telecomunicaciones/[]/MICITT/Costa Rica
dc.language.iso: en_US
dc.source: CIBSE 2016 - XIX Ibero-American Conference on Software Engineering (pp. 185-199). Ecuador
dc.subject: Effort prediction model
dc.subject: Learning schemes
dc.subject: Genetic approach
dc.subject: Experiment
dc.subject: ISBSG dataset
dc.subject: Function points
dc.title: An Empirical Validation of an Automated Genetic Software Effort Prediction Framework Using the ISBSG Dataset
dc.type: info:eu-repo/semantics/conferenceObject
dc.type: Conference object
dc.description.procedence: UCR::Vicerrectoría de Investigación::Unidades de Investigación::Ingeniería::Centro de Investigaciones en Tecnologías de Información y Comunicación (CITIC)
dc.identifier.codproyecto: 834-B5-A18
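The abstract describes a genetic approach that searches over learning schemes of the form preprocessor + attribute selector + modeling technique, evaluated by M×N-fold cross-validation. The following is a minimal, illustrative Python sketch of that idea, not the authors' implementation: the component pool names echo techniques mentioned in the abstract, while the `evaluate` callback, GA parameters, and selection strategy are assumptions chosen for brevity.

```python
import random

# Hypothetical component pools. The paper's actual search space comprised
# 2 data preprocessors, 5 attribute selectors, and 15 modeling techniques
# (150 learning schemes in total); these shorter lists are illustrative.
PREPROCESSORS = ["None", "Log"]
SELECTORS = ["BestFirst", "ForwardSelection", "LinearForwardSelection",
             "BackwardElimination", "GeneticSearch"]
MODELS = ["M5-Rules", "SMOreg", "LinearRegression", "M5P", "kNN"]

def random_scheme():
    """A chromosome encodes one learning scheme: (preprocessor, selector, model)."""
    return (random.choice(PREPROCESSORS),
            random.choice(SELECTORS),
            random.choice(MODELS))

def crossover(a, b):
    """Uniform crossover: each component is taken from either parent."""
    return tuple(random.choice(pair) for pair in zip(a, b))

def mutate(scheme, rate=0.2):
    """Replace each component with a random pool member with probability `rate`."""
    pools = (PREPROCESSORS, SELECTORS, MODELS)
    return tuple(random.choice(pool) if random.random() < rate else gene
                 for gene, pool in zip(scheme, pools))

def genetic_search(evaluate, pop_size=20, generations=30):
    """Evolve learning schemes; `evaluate` scores a scheme (higher is better)
    and stands in for the paper's MxN-fold cross-validation performance."""
    population = [random_scheme() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=evaluate, reverse=True)
        elite = scored[: pop_size // 2]  # truncation selection keeps the best half
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        population = elite + children
    return max(population, key=evaluate)
```

In use, `evaluate` would train the scheme on a project subset and return a cross-validated accuracy measure; here a toy fitness (count of components matching a target scheme) is enough to exercise the search.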

