TY - GEN
T1 - The impact of using regression models to build defect classifiers
AU - Rajbahadur, Gopi Krishnan
AU - Wang, Shaowei
AU - Kamei, Yasutaka
AU - Hassan, Ahmed E.
PY - 2017/6/29
Y1 - 2017/6/29
N2 - It is common practice to discretize continuous defect counts into defective and non-defective classes and use them as a target variable when building defect classifiers (discretized classifiers). However, this discretization of continuous defect counts leads to information loss that might affect the performance and interpretation of defect classifiers. Another possible approach to building defect classifiers is to use regression models and then discretize the predicted defect counts into defective and non-defective classes (regression-based classifiers). In this paper, we compare the performance and interpretation of defect classifiers that are built using both approaches (i.e., discretized classifiers and regression-based classifiers) across six commonly used machine learning classifiers (i.e., linear/logistic regression, random forest, KNN, SVM, CART, and neural networks) and 17 datasets. We find that: i) random-forest-based classifiers outperform other classifiers (best AUC) for both classifier-building approaches; ii) in contrast to common practice, building a defect classifier using discretized defect counts (i.e., discretized classifiers) does not always lead to better performance. Hence, we suggest that future defect classification studies should consider building regression-based classifiers (in particular when the defective ratio of the modeled dataset is low). Moreover, we suggest that both approaches for building defect classifiers should be explored, so that the best-performing classifier can be used when determining the most influential features.
UR - http://www.scopus.com/inward/record.url?scp=85026518861&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85026518861&partnerID=8YFLogxK
DO - 10.1109/MSR.2017.4
M3 - Conference contribution
AN - SCOPUS:85026518861
T3 - IEEE International Working Conference on Mining Software Repositories
SP - 135
EP - 145
BT - Proceedings - 2017 IEEE/ACM 14th International Conference on Mining Software Repositories, MSR 2017
PB - IEEE Computer Society
T2 - 14th IEEE/ACM International Conference on Mining Software Repositories, MSR 2017
Y2 - 20 May 2017 through 21 May 2017
ER -