Investigating the Effects of Balanced Training and Testing Datasets on Effort-Aware Fault Prediction Models

Kwabena Ebo Bennin, Jacky Keung, Akito Monden, Yasutaka Kamei, Naoyasu Ubayashi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

33 Citations (Scopus)

Abstract

To prioritize software quality assurance efforts, fault prediction models have been proposed to distinguish faulty modules from clean modules. The performance of such models is often biased due to the skewness or class imbalance of the datasets considered. To improve the prediction performance of these models, sampling techniques have been employed to rebalance the distribution of fault-prone and non-fault-prone modules. The effects of these techniques have been evaluated in terms of accuracy, geometric mean, and F1-measure in previous studies; however, these measures do not consider the effort needed to fix faults. To empirically investigate the effect of sampling techniques on the performance of software fault prediction models in a more realistic setting, this study employs Norm(Popt), an effort-aware measure that considers the testing effort. We performed two sets of experiments aimed at (1) assessing the effects of sampling techniques on effort-aware models and finding the appropriate class distribution for training datasets, and (2) investigating the role of balanced training and testing datasets on the performance of predictive models. Of the four sampling techniques applied, the over-sampling techniques outperformed the under-sampling techniques, with Random Over-sampling performing best with respect to the Norm(Popt) evaluation measure. In addition, the performance of all prediction models improved when sampling techniques were applied at rates of 20-30% on the training datasets, implying that a strictly balanced dataset (50% faulty modules and 50% clean modules) does not result in the best performance for effort-aware models. Our results also indicate that the performance of effort-aware models depends significantly on the proportions of the two classes in the testing dataset. Models trained on moderately balanced datasets are more likely to withstand fluctuations in performance as the class distribution in the testing data varies.
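Two of the ingredients named in the abstract are easy to sketch in code. First, Random Over-sampling: duplicating randomly chosen faulty modules in the training data until they reach a target proportion. The NumPy sketch below is purely illustrative and not the paper's implementation; the function name and the default rate of 0.25 (chosen to sit inside the 20-30% range the study found effective) are assumptions.

import numpy as np

def random_oversample(X, y, minority_rate=0.25, rng=None):
    # Duplicate randomly chosen faulty (y == 1) rows until they make up
    # `minority_rate` of the training set. Illustrative sketch only.
    rng = np.random.default_rng(rng)
    X, y = np.asarray(X), np.asarray(y)
    minority = np.flatnonzero(y == 1)
    m, n = len(minority), len(y)
    # Solve (m + k) / (n + k) = minority_rate for the number k of extra copies.
    k = int(np.ceil((minority_rate * n - m) / (1.0 - minority_rate)))
    if k <= 0:
        return X, y  # already at or above the target rate
    extra = rng.choice(minority, size=k, replace=True)
    idx = rng.permutation(np.concatenate([np.arange(n), extra]))
    return X[idx], y[idx]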
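Second, Norm(Popt). One common formulation in the effort-aware literature normalizes the area under the cumulative-effort (LOC) versus cumulative-faults curve between the worst and optimal module orderings; the paper's exact definition may differ in detail. In the sketch below, loc, faults, and predicted_risk are hypothetical per-module arrays, and modules are "inspected" in decreasing order of predicted risk per line of code.

import numpy as np

def _area(loc, faults, order):
    # Trapezoidal area under the cumulative %LOC vs. cumulative %faults
    # curve when modules are inspected in the given order.
    x = np.concatenate(([0.0], np.cumsum(loc[order]) / loc.sum()))
    y = np.concatenate(([0.0], np.cumsum(faults[order]) / faults.sum()))
    return float(np.sum((x[1:] - x[:-1]) * (y[1:] + y[:-1]) / 2.0))

def norm_popt(loc, faults, predicted_risk):
    # Normalized Popt: 1.0 for the optimal module ordering, 0.0 for the worst.
    loc = np.asarray(loc, dtype=float)
    faults = np.asarray(faults, dtype=float)
    risk = np.asarray(predicted_risk, dtype=float)
    model = _area(loc, faults, np.argsort(-risk / loc))
    optimal = _area(loc, faults, np.argsort(-faults / loc))
    worst = _area(loc, faults, np.argsort(faults / loc))
    return (model - worst) / (optimal - worst)

Values near 1 mean the predicted ranking places fault-dense modules early, so most faults are found with little inspection effort.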

Original language: English
Title of host publication: Proceedings - 2016 IEEE 40th Annual Computer Software and Applications Conference, COMPSAC 2016
Editors: William Claycomb, Dejan Milojicic, Ling Liu, Mihhail Matskin, Zhiyong Zhang, Sorel Reisman, Hiroyuki Sato, Sheikh Iqbal Ahamed
Publisher: IEEE Computer Society
Pages: 154-163
Number of pages: 10
ISBN (Electronic): 9781467388450
Publication status: Published - Aug 24, 2016
Event: 2016 IEEE 40th Annual Computer Software and Applications Conference, COMPSAC 2016 - Atlanta, United States
Duration: Jun 10, 2016 – Jun 14, 2016

Publication series

Name: Proceedings - International Computer Software and Applications Conference
Volume: 1
ISSN (Print): 0730-3157

Other

Other: 2016 IEEE 40th Annual Computer Software and Applications Conference, COMPSAC 2016
Country/Territory: United States
City: Atlanta
Period: 6/10/16 – 6/14/16

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Science Applications
