Refactoring aims to improve the quality of a software system while preserving its external behavior. In practice, refactoring comprises many specific and diverse operations, which differ in scope and thus in their potential impact on both production and test code. We present a large-scale quantitative study, complemented by a qualitative analysis, involving 615,196 test cases to understand how and to what extent different refactoring operations impact a system's test suites. Our findings show that while the vast majority of refactoring operations rarely, if ever, induce test breaks, some specific refactoring types (e.g., 'RENAME Attribute' and 'RENAME Class') have a higher chance of breaking test suites. Moreover, 'ADD Parameter' and 'CHANGE Return Type' operations often require additional lines of code changes to fix the test suites they break. Although some modern IDEs provide features to automatically apply these two types of refactoring, they cannot always avoid test breaks, thus demanding extra human effort.