Saturday, 1 February 2014

TDD is a Tool

I remember being introduced to Test Driven Development (TDD) very well. This is because it had such an overwhelming effect on how I write code day to day. It was incredibly alien, difficult, yet rewarding. On this journey over the last five years I've changed my style, learned how not to do it, and finally found my "sweet spot" when it comes to pragmatic TDD.

Deliver Value

Writing code is fun. Developing an application or system is fun. Using new technology is fun. Despite this, the end goal should always be to deliver value. Choosing to deliver business value over religiously following a practice was a turning point in my journey. After all, the user doesn't care what is behind the scenes; as long as they can use your software, they're happy.

When to Write Tests?

One of the guidelines when starting TDD is

"Never write a line of code without a failing test" - Kent Beck

This rule is wrong on many levels. Firstly, it cripples most developers when starting TDD. Secondly, the guideline is broken all the time by seasoned evangelists. Writing some framework code? Writing data access code? Writing markup? In any of these scenarios, writing a failing test first would be wasted effort. This rule should be reworded.

"Writing logic? Never write a line of code without a failing test" - me

It's OK to not use TDD

After adopting TDD, practitioners tend to face two challenges: other developers looking down on non-TDD practices, and feeling as if they are "cheating" when not using TDD. The latter was an issue I struggled with. Newbies tend to hit the same problem, and this goes back to the mantra above. One of the key lessons I've learned over the past few years is that using TDD only where appropriate is fine. Not all code needs TDD. Even Kent Beck discusses this when he refers to "Obvious Implementation".

Spike Solutions

Another game changer in my journey was the concept of "Spike and Stabilize". Using this technique you can deliver business value quickly, gather feedback as soon as possible, and then either fail fast or wrap the code in tests and clean it up.

CRUD

Most of the code I (and others) write is very similar. I'd bet this is true across different fields of software development. That said, each CRUD app we create has a small part that is unique. Using TDD to write yet another CRUD app is tedious, and I'd imagine this is why many ditch the practice after some time. The benefit comes from using TDD for the 20% that is domain logic; a combination of obvious implementation and spike and stabilize can assist in the creation of the other 80%.

It's about Design too

TDD by Example gives the impression that the practice is primarily a testing discipline. This is not true. TDD does limit the bugs I introduce and enforces basic correctness, however bugs will still slip through. After all, the quality of the code is only as good as the quality of the tests. Growing Object-Oriented Software, Guided by Tests and others introduce the idea that TDD is also a design process. Listening to the tests is a core concept: if something is hard to test, chances are the code in question can be improved.
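
To make that concrete, consider a contrived example (the types are invented purely for illustration). A class that creates its own dependencies is painful to unit test, and that pain is the design feedback:

    using System.Collections.Generic;
    using System.Linq;

    public class Invoice { public decimal Amount { get; set; } }

    public interface IInvoiceRepository
    {
        IEnumerable<Invoice> FindByCustomer(int customerId);
    }

    // Hard to test: the concrete repository is created inside the method, so a unit
    // test cannot substitute it and ends up needing a real database.
    public class TightlyCoupledInvoiceService
    {
        public decimal TotalFor(int customerId)
        {
            var repository = new SqlInvoiceRepository();
            return repository.FindByCustomer(customerId).Sum(invoice => invoice.Amount);
        }
    }

    // Listening to that pain pushes the dependency out to the constructor, which makes
    // the class trivial to test and loosens the coupling as a side effect.
    public class InvoiceService
    {
        private readonly IInvoiceRepository repository;

        public InvoiceService(IInvoiceRepository repository) { this.repository = repository; }

        public decimal TotalFor(int customerId)
        {
            return repository.FindByCustomer(customerId).Sum(invoice => invoice.Amount);
        }
    }

    public class SqlInvoiceRepository : IInvoiceRepository
    {
        public IEnumerable<Invoice> FindByCustomer(int customerId)
        {
            throw new System.NotImplementedException("stand-in for a real database call");
        }
    }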

Follow the Risks

The final lesson I've come to realise is that even if you work with people who don't practice TDD, you can still reap the benefits. Simply test where the risk lives. Ignore the framework and the standard library, and focus on the code that carries risk. This might be a small, core part of your application. 100% code coverage is not a goal worth aiming for.

It's a Tool

At the end of the day, TDD is a tool, not a goal. In this day and age many believe that TDD should be mandatory. While I agree with the sentiment, its use should be restricted to where and when it makes sense, and that is for each developer to decide. The findings above allow me to be pragmatic, yet still have confidence in the quality of my code.

The Correct Way to use var in C#

The .NET community is not one prone to controversy, though one topic comes up time and time again when I pair with other developers: how to use var in C#.

The var keyword was introduced in C# 3.0, which shipped with .NET 3.5. Unlike similar keywords in other languages, this is still a strongly typed declaration; the compiler infers the type from the right hand side. For example, if we declare a string using var we cannot re-assign the variable to another type - doing so is a compile time error.
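
A quick illustration:

    var greeting = "Hello";   // greeting is inferred as string at compile time
    greeting = "Hej";         // fine - still a string
    greeting = 42;            // compile time error: cannot implicitly convert type 'int' to 'string'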

There are two parties who have strong feelings about the use of var, both of which are wrong.

Never use var

Some developers suggest the use of var should be banned outright. This leads to code such as the following: overly verbose, and in some cases obscuring the intent of the code. This is most common when dealing with collections or generics.
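
Something along these lines - the types here are invented purely for illustration, but the shape should be familiar:

    using System.Collections.Generic;
    using System.Linq;

    public class Order { public decimal Total { get; set; } }

    public interface IOrderRepository
    {
        List<Order> FindForCustomer(int customerId);
    }

    public class OrderScreen
    {
        private readonly IOrderRepository repository;

        public OrderScreen(IOrderRepository repository) { this.repository = repository; }

        public decimal LargestOrder(int customerId)
        {
            // Every declaration spells the type out in full, even when the right hand
            // side already states it.
            List<Order> orders = repository.FindForCustomer(customerId);
            Dictionary<int, List<Order>> ordersByCustomer = new Dictionary<int, List<Order>> { { customerId, orders } };
            IEnumerable<decimal> totals = ordersByCustomer[customerId].Select(order => order.Total);
            return totals.Max();
        }
    }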

Always use var

Other developers claim you should "var all the things". This leads to code with the opposite problem: the intent is obscured because the reader cannot tell what type they are dealing with. This matters most during code reviews, or any time you are not relying on the IDE's intellisense to remind you. After all, code is read many more times than it is written.
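
The same illustrative class with everything flipped to var:

    public class OrderScreen
    {
        private readonly IOrderRepository repository;

        public OrderScreen(IOrderRepository repository) { this.repository = repository; }

        public decimal LargestOrder(int customerId)
        {
            // Reading this in a code review, what does FindForCustomer return?
            // An array, a List, an IQueryable that hits the database on enumeration?
            var orders = repository.FindForCustomer(customerId);

            // Fine - the type is sat right there on the right hand side.
            var ordersByCustomer = new Dictionary<int, List<Order>> { { customerId, orders } };

            // Again, the reader is left guessing at the element type.
            var totals = ordersByCustomer[customerId].Select(order => order.Total);
            return totals.Max();
        }
    }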

Best of both worlds

The solution to this issue is simple. Where the type can be inferred just by looking at the source code - that is, the type is already written on the right hand side - use implicit typing. Where it cannot, use an explicit declaration. Using the same examples as above, this would look like the following.
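
Roughly speaking, and sticking with the same illustrative class:

    public decimal LargestOrder(int customerId)
    {
        // The return type is not visible on the right, so spell it out for the reader.
        List<Order> orders = repository.FindForCustomer(customerId);

        // The type is already written on the right, so var just removes the duplication.
        var ordersByCustomer = new Dictionary<int, List<Order>> { { customerId, orders } };

        // Not obvious from the right hand side, so declare it explicitly.
        IEnumerable<decimal> totals = ordersByCustomer[customerId].Select(order => order.Total);

        return totals.Max();
    }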

As with most things when it comes to software development, there is never a black and white answer. Always gauge decisions and patterns based on context. Just because automated tooling such as the excellent Resharper suggests you use implicit typing doesn't always make it correct.

Bonus

Talking of Resharper, a quick Alt+Enter on an explicit or implicit declaration lets you switch between the two, meaning you can be lazy and have the IDE pull in the right type when required.

Top Down vs Bottom Up

Top down development has you starting at the highest point in the application you can. From there you code downwards until there is nothing left to develop, at which point you should be code complete. Along the way you may need to stub out areas that have not yet been created or designed.

Bottom up development has you starting at the lowest point in the application. The idea being that this part of the application has the most complexity or will be the most important. You will build the system up from a series of smaller components.

Top down and bottom up development were introduced to me in my early days at university. At the time the distinction didn't mean much - I was very much a developer who would work from the bottom up.

Over time I have completely switched my stance. I believe agile practices and TDD are the reason for this change. I feel so strongly about it that I would go as far as to claim that, within an agile team, bottom up development is an anti-pattern.

Consider the following tasks, to be completed by a team of four developers.

  • Create controller - main entry point, request mapping.
  • Create service - service layer, simple business logic.
  • Database query - thin wrapper around complex DB query.

With a bottom up approach a pair of developers could work on the complex database query. After some time they would have this working. The other two developers could start with the controller or service.

The problem with this approach comes from the painful integration process. The developers working on the service might be coding against the interface the team discussed during a planning session, while the developers on the query may have had to change their approach, leaving the two halves out of sync until integration.

This example is trivial, but imagine a story with thirty tasks, more developers and more complexity, and this bottom up approach becomes painful. Over the past few years my top down approach has evolved to avoid it.
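
Stubbed out, that workflow might look something like the sketch below. Every class and method name here is illustrative; it is the shape that matters.

    using System;
    using System.Collections.Generic;

    public class Order { public decimal Total { get; set; } }

    public class Report
    {
        public Report(IEnumerable<Order> orders) { Orders = orders; }
        public IEnumerable<Order> Orders { get; private set; }
    }

    // Main entry point - maps the request and delegates straight to the service.
    public class ReportController
    {
        private readonly IReportService service;

        public ReportController(IReportService service) { this.service = service; }

        public Report Get(int customerId)
        {
            return service.ReportFor(customerId);
        }
    }

    public interface IReportService
    {
        Report ReportFor(int customerId);
    }

    // Service layer - will hold the simple business logic, for now it just passes through.
    public class ReportService : IReportService
    {
        private readonly IReportQuery query;

        public ReportService(IReportQuery query) { this.query = query; }

        public Report ReportFor(int customerId)
        {
            return new Report(query.FindOrders(customerId));
        }
    }

    public interface IReportQuery
    {
        IEnumerable<Order> FindOrders(int customerId);
    }

    // Thin wrapper around the complex DB query - the body is still to be written.
    public class OrderReportQuery : IReportQuery
    {
        public IEnumerable<Order> FindOrders(int customerId)
        {
            throw new NotImplementedException("complex query goes here");
        }
    }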

My first step would be to stub out the workflow, as in the implementation above. There is no real logic here - only the objects' collaboration is implemented. At this stage there are no tests and TDD is not used; after all, there is no logic yet. The code is so simple it can be reasoned about through peer review, planning sessions and so on.

At this stage all of the tasks are open for any developer to pick up. If a breaking change were required, there would be no way for one pair to commit the change without the other pair knowing. Another benefit of this approach is that an end to end acceptance test can be wrapped around the functionality from the get go.

As part of these tasks each developer would use TDD - remember, no tests exist at this point. Building up the tests in stages ensures that the way the objects collaborate is preserved, and that the domain logic being implemented is correct. Does this mean we aren't doing TDD? No, of course not. The tests will drive the implementation. If we need to introduce new objects that is fine; these simply become implementation details that the other developers need not worry about, as long as the workflow is not broken.
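
For instance, the pair picking up the service task might start with a collaboration test along these lines (again illustrative, written against the stubs above using NUnit and a hand-rolled fake query):

    using System.Collections.Generic;
    using System.Linq;
    using NUnit.Framework;

    [TestFixture]
    public class ReportServiceTests
    {
        [Test]
        public void BuildsTheReportFromTheOrdersReturnedByTheQuery()
        {
            var query = new StubReportQuery(new Order { Total = 10m }, new Order { Total = 5m });
            var service = new ReportService(query);

            var report = service.ReportFor(42);

            Assert.That(report.Orders.Count(), Is.EqualTo(2));
        }

        // A hand-rolled stub keeps the example free of mocking framework details.
        private class StubReportQuery : IReportQuery
        {
            private readonly IEnumerable<Order> orders;

            public StubReportQuery(params Order[] orders) { this.orders = orders; }

            public IEnumerable<Order> FindOrders(int customerId) { return orders; }
        }
    }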

This approach to top down development isn't new, though many don't appreciate its benefits. I plan on expanding on this style of pragmatic TDD in the coming months.