EFindBugs: Effective error ranking for FindBugs

Haihao Shen, Jianhong Fang, Jianjun Zhao

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

24 Citations (Scopus)

Abstract

Static analysis tools have been widely used to detect potential defects without executing programs, helping programmers become aware of subtle correctness issues at an early stage. However, static defect detection tools suffer from high false-positive rates, so programmers must spend considerable time screening real bugs out of a large number of reported warnings, which is time-consuming and inefficient. To alleviate this problem during report inspection, we present EFindBugs, which employs an effective two-stage error ranking strategy that suppresses false positives and ranks true error reports on top, so that real bugs in the programs can be found and fixed more easily. In the first stage, EFindBugs initializes the ranking by assigning a predefined defect likelihood to each bug pattern and sorting the error reports by defect likelihood in descending order. In the second stage, EFindBugs optimizes the initial ranking self-adaptively using feedback from users. This optimization is executed automatically and is based on the correlations among error reports that share the same bug pattern. Our experiment on three widely used Java projects (AspectJ, Tomcat, and Axis) shows that our ranking strategy outperforms the original ranking in FindBugs in terms of precision, recall, and F1-score.
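The abstract describes a two-stage ranking: stage 1 sorts warnings by a predefined per-pattern defect likelihood; stage 2 adjusts those likelihoods from user feedback, so all reports sharing a bug pattern move together. A minimal Python sketch of that idea (not the authors' implementation; the pattern names and likelihood values below are hypothetical):

```python
# Hypothetical sketch of the two-stage ranking idea from the abstract.
# Pattern names and prior values are invented for illustration.

# Stage 1: each bug pattern starts with a predefined defect likelihood.
likelihood = {
    "NP_NULL_DEREF": 0.9,   # assumed prior: usually a real bug
    "SE_BAD_FIELD": 0.5,
    "DM_STRING_CTOR": 0.2,  # assumed prior: usually noise
}

def rank(warnings):
    """Sort (warning_id, pattern) pairs by pattern likelihood, descending."""
    return sorted(warnings, key=lambda w: likelihood[w[1]], reverse=True)

# Stage 2: feedback on one report adjusts the likelihood of its whole
# pattern, so correlated reports with the same pattern rise or sink together.
def feedback(pattern, is_true_bug, step=0.1):
    delta = step if is_true_bug else -step
    likelihood[pattern] = min(1.0, max(0.0, likelihood[pattern] + delta))

warnings = [("w1", "DM_STRING_CTOR"), ("w2", "NP_NULL_DEREF"), ("w3", "SE_BAD_FIELD")]
initial = rank(warnings)                       # ranked by the predefined priors
feedback("DM_STRING_CTOR", is_true_bug=False)  # user marks w1 a false positive
reranked = rank(warnings)                      # w1's whole pattern sinks
```

The paper's actual update rule and predefined likelihood values are specified in the full text, not in this record.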

Original language: English
Title of host publication: Proceedings - 4th IEEE International Conference on Software Testing, Verification, and Validation, ICST 2011
Pages: 299-308
Number of pages: 10
DOIs: 10.1109/ICST.2011.51
Publication status: Published - 2011
Externally published: Yes
Event: 4th IEEE International Conference on Software Testing, Verification, and Validation, ICST 2011 - Berlin, Germany
Duration: Mar 21 2011 - Mar 25 2011

Other

4th IEEE International Conference on Software Testing, Verification, and Validation, ICST 2011
Country: Germany
City: Berlin
Period: 3/21/11 - 3/25/11

Fingerprint

  • Defects
  • Static analysis
  • Sorting
  • Screening
  • Inspection
  • Feedback
  • Experiments
  • Defect detection

All Science Journal Classification (ASJC) codes

  • Software

Cite this

Shen, H., Fang, J., & Zhao, J. (2011). EFindBugs: Effective error ranking for FindBugs. In Proceedings - 4th IEEE International Conference on Software Testing, Verification, and Validation, ICST 2011 (pp. 299-308). [5770619] https://doi.org/10.1109/ICST.2011.51

@inproceedings{3524c6452d11440d952e4e450996b9d4,
title = "EFindBugs: Effective error ranking for FindBugs",
abstract = "Static analysis tools have been widely used to detect potential defects without executing programs, helping programmers become aware of subtle correctness issues at an early stage. However, static defect detection tools suffer from high false-positive rates, so programmers must spend considerable time screening real bugs out of a large number of reported warnings, which is time-consuming and inefficient. To alleviate this problem during report inspection, we present EFindBugs, which employs an effective two-stage error ranking strategy that suppresses false positives and ranks true error reports on top, so that real bugs in the programs can be found and fixed more easily. In the first stage, EFindBugs initializes the ranking by assigning a predefined defect likelihood to each bug pattern and sorting the error reports by defect likelihood in descending order. In the second stage, EFindBugs optimizes the initial ranking self-adaptively using feedback from users. This optimization is executed automatically and is based on the correlations among error reports that share the same bug pattern. Our experiment on three widely used Java projects (AspectJ, Tomcat, and Axis) shows that our ranking strategy outperforms the original ranking in FindBugs in terms of precision, recall, and F1-score.",
author = "Haihao Shen and Jianhong Fang and Jianjun Zhao",
year = "2011",
doi = "10.1109/ICST.2011.51",
language = "English",
isbn = "9780769543420",
pages = "299--308",
booktitle = "Proceedings - 4th IEEE International Conference on Software Testing, Verification, and Validation, ICST 2011",

}

TY - GEN

T1 - EFindBugs

T2 - Effective error ranking for FindBugs

AU - Shen, Haihao

AU - Fang, Jianhong

AU - Zhao, Jianjun

PY - 2011

Y1 - 2011

AB - Static analysis tools have been widely used to detect potential defects without executing programs, helping programmers become aware of subtle correctness issues at an early stage. However, static defect detection tools suffer from high false-positive rates, so programmers must spend considerable time screening real bugs out of a large number of reported warnings, which is time-consuming and inefficient. To alleviate this problem during report inspection, we present EFindBugs, which employs an effective two-stage error ranking strategy that suppresses false positives and ranks true error reports on top, so that real bugs in the programs can be found and fixed more easily. In the first stage, EFindBugs initializes the ranking by assigning a predefined defect likelihood to each bug pattern and sorting the error reports by defect likelihood in descending order. In the second stage, EFindBugs optimizes the initial ranking self-adaptively using feedback from users. This optimization is executed automatically and is based on the correlations among error reports that share the same bug pattern. Our experiment on three widely used Java projects (AspectJ, Tomcat, and Axis) shows that our ranking strategy outperforms the original ranking in FindBugs in terms of precision, recall, and F1-score.

UR - http://www.scopus.com/inward/record.url?scp=79958719774&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=79958719774&partnerID=8YFLogxK

U2 - 10.1109/ICST.2011.51

DO - 10.1109/ICST.2011.51

M3 - Conference contribution

AN - SCOPUS:79958719774

SN - 9780769543420

SP - 299

EP - 308

BT - Proceedings - 4th IEEE International Conference on Software Testing, Verification, and Validation, ICST 2011

ER -