
MBUnit to NUnit

Over the last few weeks we've ported our tests from MBUnit to NUnit. We did this because a quick spike showed that NUnit runs our tests almost fifty percent faster. For example, our common projects' test time dropped from around 40s to around 20s on average.

This whole process was no easy task. Initially our largest project was converted by the whole team. We split into pairs/individuals and tackled a test project each. Working in this manner we could commit after each project, meaning at any one time the build was only fractionally broken rather than completely unbuildable. We had previously tried a big bang approach, but after several thousand errors we quickly reverted. After each commit the tests were gradually moved over. This took around an hour or so, so we used that week's allocated dojo/technical dojo time. For the remaining projects an ad hoc approach was taken: the first pair to work on a project would be responsible for porting its tests over. Thankfully, all but one of our other projects were fairly straightforward to upgrade and were done as part of waste or kaizen.

Some of this process could be automated, but things were not completely smooth. Converting the MBUnit namespace, for example, was achieved by a project-level find and replace. Other issues, such as the Asserts being slightly different, required manual fixes. One example is asserting that an exception is thrown: MBUnit used attributes, while in NUnit it is preferable to use Assert.Throws. The other issue we faced was porting over the relevant build scripts and CruiseControl configs. Again there was no easy way to do this. We had a fair few CI failures along the way, but when editing the XML build files there is no real way to test what you've done without actually trying it!
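As a rough sketch of that conversion (CustomerRepository and the test below are invented for illustration, not taken from our codebase), the attribute-based style maps to Assert.Throws like this:

```csharp
using System;
using NUnit.Framework; // was: using MbUnit.Framework;

public class Customer { }

public class CustomerRepository
{
    public void Save(Customer customer)
    {
        if (customer == null) throw new ArgumentNullException("customer");
    }
}

[TestFixture]
public class CustomerRepositoryTests
{
    // Before (MBUnit): the expectation lived in an attribute, so any
    // line in the test body could satisfy it.
    //
    // [Test]
    // [ExpectedException(typeof(ArgumentNullException))]
    // public void Save_NullCustomer_Throws()
    // {
    //     new CustomerRepository().Save(null);
    // }

    // After (NUnit): Assert.Throws pins the exception to one call.
    [Test]
    public void Save_NullCustomer_Throws()
    {
        var repository = new CustomerRepository();
        Assert.Throws<ArgumentNullException>(() => repository.Save(null));
    }
}
```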

Overall the whole episode was not as bad as I thought it would be. We seem pretty stable at the time of writing, and the tests are definitely quicker to run locally. We still have slow tests, and as part of waste we'll be looking into whether these slow tests are needed. One interesting side effect of the upgrade is how many dodgy tests we've removed. Tests such as an Assert.IsNotNull after creating a new object - the sort of test everyone writes when starting TDD - have been deleted. These legacy tests serve no purpose now, but they were the key starting point of the introduction of TDD to Codeweavers several years ago. Other tests which were covered elsewhere or simply not needed were also removed. The final issue we are aiming to improve is our regression/acceptance tests, many of which are Selenium tests.
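To be concrete, this is the shape of test we threw away (QuoteCalculator is a made-up stand-in). A constructor cannot return null in C#, so the assert can never fail and the test proves nothing:

```csharp
using NUnit.Framework;

public class QuoteCalculator { }

[TestFixture]
public class QuoteCalculatorTests
{
    // A "new it up then assert not null" test: it can never fail,
    // so it only adds runtime and maintenance cost.
    [Test]
    public void Constructor_ReturnsInstance()
    {
        var calculator = new QuoteCalculator();
        Assert.IsNotNull(calculator);
    }
}
```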

Would we recommend upgrading your test suite to the latest/next best thing? Not unless you can prove with figures that it has an actual benefit. We delivered no direct value to the business by doing this, but by taking one step to shorten our feedback cycle we hope to see the benefit over time. If anything, we should now be more likely to run our tests. As for why MBUnit was slower: it features a lot of functionality we simply don't need, while NUnit is more lightweight and just plain faster for our usage. We could perhaps speed the tests up even more by writing our own test runner, but the likes of Visual Studio integration are a must, so that is no easy task.

One interesting point to conclude with: during this process there was talk of wrapping NUnit within a Codeweavers test framework, essentially meaning we could switch test frameworks whenever we liked. Is this overkill for most projects? Most likely, but it is something to consider, especially for large applications. Who knows, maybe there will be an even faster framework out there that we can upgrade to again next year...
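A minimal sketch of that wrapper idea, with all names invented here: tests would call a Codeweavers-owned assertion class, so only one assembly would reference NUnit directly. Attributes such as [Test] would need similar aliasing, which is where most of the real work would be.

```csharp
using System;
using NUnit.Framework; // the only place the real framework leaks in

namespace Codeweavers.Testing
{
    // Thin facade over the current framework's asserts; swapping
    // frameworks later would mean reimplementing this one class.
    public static class Check
    {
        public static void Equal<T>(T expected, T actual)
        {
            Assert.AreEqual(expected, actual);
        }

        public static void NotNull(object value)
        {
            Assert.IsNotNull(value);
        }

        public static TException Throws<TException>(TestDelegate action)
            where TException : Exception
        {
            return Assert.Throws<TException>(action);
        }
    }
}
```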
