TY - JOUR
T1 - An empirical study on the effects of code visibility on program testability
AU - Ma, Lei
AU - Zhang, Cheng
AU - Yu, Bing
AU - Sato, Hiroyuki
N1 - Funding Information:
We would like to thank René Just for sharing Defects4J and for suggestions on its usage. We thank Michael Ernst and Sai Zhang for discussions on Randoop, and Qingzhou Luo and Cyrille Artho for discussions on JPF Symbolic PathFinder. We also thank Gordon Fraser and José Campos for their help with configuring EvoSuite. This work was supported by the Fundamental Research Funds for the Central Universities under grant AUGA5710000816, and by the National High-tech R&D Program of China (863 Program) under grants 2015AA020101 and 2015AA020108.
Publisher Copyright:
© 2016, Springer Science+Business Media New York.
PY - 2017/9/1
Y1 - 2017/9/1
AB - Software testability represents the degree of ease with which a software artifact supports testing. When it is easy to detect defects in a program through testing, the program has high testability; otherwise, its testability is low. As an abstract property of programs, testability can be measured by various metrics, which are affected by different factors of design and implementation. In object-oriented software development, code visibility is important to support design principles such as information hiding. It is widely believed that code visibility has some effect on testability. However, little empirical evidence has been presented to clarify whether and how software testability is influenced by code visibility. We have performed a comprehensive empirical study to shed light on this problem. We first use code coverage as a concrete proxy for testability. We select 27 real-world software programs as subjects and run two state-of-the-art automated testing tools, Randoop and EvoSuite, on these programs to analyze their code coverage in comparison with that of developer-written tests. The results show that code visibility does not necessarily affect code coverage, but can significantly affect automated tools: developer-written tests achieve similar coverage on code areas with different visibility, while low code visibility often leads to low code coverage for automated tools. In addition, we have developed two enhanced variants of Randoop that implement multiple strategies to handle code visibility. The results on these Randoop variants show that different treatments of code visibility can lead to significant differences in code coverage for automated tools. In the second part of our study, we use fault detection rate as another concrete measure of testability. We apply the automated testing tools to 357 real faults. The results of this in-depth analysis are consistent with those of the first part, demonstrating the significant effects of code visibility on program testability.
UR - http://www.scopus.com/inward/record.url?scp=84991396299&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84991396299&partnerID=8YFLogxK
DO - 10.1007/s11219-016-9340-8
M3 - Article
AN - SCOPUS:84991396299
SN - 0963-9314
VL - 25
SP - 951
EP - 978
JO - Software Quality Journal
JF - Software Quality Journal
IS - 3
ER -