Write Unit Tests? Start deleting them

A recent blog post by Steve Klabnik concluded with a statement about tossing unit tests if you have end-to-end tests covering the code in question.

Don't be afraid to change the tests! As soon as you've verified that you've transcribed the code correctly, don't be afraid to just nuke things and start again. Especially if you have integration level tests that confirm that your features actually work, your unit tests are expendable. If they're not useful, kill them!

A few people on Twitter found this odd, and I'd have included myself in this statement a while back.

Kent Beck's TDD screencasts changed my view on deleting unit tests, however. During the later videos he actually deleted some tests. Few TDD resources mention this. One of the key points beginners learn is that if you break any tests, you've introduced a regression. This is not always the case. If you follow the rule of never deleting ANY test you encounter, you will be stuck with someone else's implementation forever. Likewise, unit tests are there to drive design, not to enforce how something works. I remember discussing deleting unit tests with my work colleagues and finding Kent's videos pretty shocking at the time. I mean, deleting unit tests!?

The more I do TDD, the less jarring this statement becomes. For example:

Consider a test for a piece of behavior, such as checking that we get a result back in a particular state. Pretend the logic is rather simple and does not warrant a separate object. Any other developer should be free to come along and change the internals of the method. As long as we get a result back in the correct state, the test should remain valid. The test should not care whether we are using strings, lists or whatever internally.
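As an illustration (the Order and OrderProcessor names here are hypothetical, not from the original post), a minimal JUnit sketch of such a behavior-level test might look like this:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Hypothetical domain code: everything inside process() is an
// implementation detail that other developers are free to change.
class Order {
    enum State { PENDING, COMPLETED }
    private State state = State.PENDING;
    State getState() { return state; }
    void complete() { state = State.COMPLETED; }
}

class OrderProcessor {
    Order process(Order order) {
        order.complete();
        return order;
    }
}

public class OrderProcessorTest {
    @Test
    public void processedOrderEndsUpCompleted() {
        Order result = new OrderProcessor().process(new Order());

        // Assert only on the observable outcome, never on whether
        // process() uses strings, lists or anything else internally.
        assertEquals(Order.State.COMPLETED, result.getState());
    }
}

Rewrite the internals of process() however you like; this test keeps passing for as long as the behavior holds.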

Occasionally I find tests like this hard to pass. In other words, I feel the logic is correct yet the test fails. Maybe I'm using a new language feature, or a language feature that does not seem to work as I expected. If so, I'll break out a new unit test that tests the implementation. Such tests are often referred to as learning tests. With this smaller focus I often become aware of what I'm doing wrong. Following Kent Beck's example, I ditch the test afterwards and move on.
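For instance, suppose the feature I was unsure about was Java's Stream.distinct() (a hypothetical choice; the original post does not say which feature prompted this). A throwaway learning test might look like:

import static org.junit.Assert.assertEquals;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
import org.junit.Test;

public class StreamLearningTest {
    // Written purely to learn how the feature behaves, not to guard
    // production code. Once it has taught me what I misunderstood,
    // it gets deleted.
    @Test
    public void distinctKeepsTheFirstOccurrenceInEncounterOrder() {
        List<Integer> result = Arrays.asList(3, 1, 3, 2, 1).stream()
                .distinct()
                .collect(Collectors.toList());

        assertEquals(Arrays.asList(3, 1, 2), result);
    }
}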

The following sums up my feelings nicely.

Neither I nor others are saying bin every unit test you have that is covered by end-to-end tests. Unit tests are great: you can run hundreds in a matter of seconds. They have their place as part of the development process, but do not find yourself working against them. However, I am saying you should delete any test that relies on implementation details. I am saying bin any test that does not make sense. I am also saying bin tests as part of a refactoring session, as long as you have test coverage higher up. If you don't have coverage such as acceptance tests, you cannot be sure you have not broken anything after the refactor.
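As a hypothetical example of the kind of implementation-detail test worth binning: here applyDiscount() is an internal step of a calculation, widened from private purely so a test could reach it.

import static org.junit.Assert.assertEquals;
import java.util.Arrays;
import org.junit.Test;

// Hypothetical: totalPence() is the behavior; applyDiscount() is an
// internal step of the calculation.
class Basket {
    private final int[] pricesInPence;
    Basket(int... pricesInPence) { this.pricesInPence = pricesInPence; }

    int totalPence() {
        return applyDiscount(Arrays.stream(pricesInPence).sum());
    }

    // Widened from private purely so the test below can call it.
    int applyDiscount(int pence) {
        return pence >= 1000 ? pence - 100 : pence;
    }
}

public class BasketInternalsTest {
    // Inline applyDiscount() during a refactor and this test breaks,
    // even though totalPence() still returns the right answer. With
    // behavior-level coverage of totalPence() in place, this is
    // exactly the sort of test to delete.
    @Test
    public void discountStepKnocksOffOneHundredPence() {
        assertEquals(900, new Basket().applyDiscount(1000));
    }
}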
