TY - GEN
T1 - An Empirical Study on Effects of Code Visibility on Code Coverage of Software Testing
AU - Ma, Lei
AU - Zhang, Cheng
AU - Yu, Bing
AU - Sato, Hiroyuki
N1 - Publisher Copyright:
© 2015 IEEE.
PY - 2015/7/23
Y1 - 2015/7/23
N2 - Software testability is the degree of difficulty of testing a program. Code visibility is important for supporting design principles such as information hiding. It is widely believed that code visibility affects testability. However, little empirical evidence has been presented to clarify whether and how software testability is influenced by code visibility. We have performed an empirical study to shed light on this problem. Our study focuses on test code coverage, in particular that of automatic testing tools. Code coverage is commonly used for various purposes, such as evaluating test adequacy, assessing test quality, and analyzing testability. Our study uses code coverage as a concrete measurement of testability. By analyzing the code coverage of two state-of-the-art tools, in comparison with that of developer-written tests, we have discovered that code visibility does not necessarily affect the code coverage of developer-written tests, but significantly affects that of automatic testing tools. Low code visibility often leads to low code coverage for automatic tools. In addition, different treatments of code visibility can result in significant differences in overall code coverage for automatic tools. Using a tool enhancement specific to code visibility, we demonstrate great potential for improving existing tools.
AB - Software testability is the degree of difficulty of testing a program. Code visibility is important for supporting design principles such as information hiding. It is widely believed that code visibility affects testability. However, little empirical evidence has been presented to clarify whether and how software testability is influenced by code visibility. We have performed an empirical study to shed light on this problem. Our study focuses on test code coverage, in particular that of automatic testing tools. Code coverage is commonly used for various purposes, such as evaluating test adequacy, assessing test quality, and analyzing testability. Our study uses code coverage as a concrete measurement of testability. By analyzing the code coverage of two state-of-the-art tools, in comparison with that of developer-written tests, we have discovered that code visibility does not necessarily affect the code coverage of developer-written tests, but significantly affects that of automatic testing tools. Low code visibility often leads to low code coverage for automatic tools. In addition, different treatments of code visibility can result in significant differences in overall code coverage for automatic tools. Using a tool enhancement specific to code visibility, we demonstrate great potential for improving existing tools.
UR - http://www.scopus.com/inward/record.url?scp=84945397072&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84945397072&partnerID=8YFLogxK
U2 - 10.1109/AST.2015.23
DO - 10.1109/AST.2015.23
M3 - Conference contribution
AN - SCOPUS:84945397072
T3 - Proceedings - 10th International Workshop on Automation of Software Test, AST 2015
SP - 80
EP - 84
BT - Proceedings - 10th International Workshop on Automation of Software Test, AST 2015
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 10th International Workshop on Automation of Software Test, AST 2015
Y2 - 23 May 2015 through 24 May 2015
ER -