Friday, 27 February 2015


This is the first part of my series on abstractions.

Coupling is one of the enemies of a healthy code base. One way to combat high coupling is to introduce abstractions.

Too few abstractions is bad: your code becomes tightly coupled. Some of the worst code I've worked with was highly coupled to the database, the UI or both. Working with such code is difficult.

Too many abstractions is equally bad. Abstraction behind abstraction can become so difficult to work with that the benefit of abstracting in the first place is lost. Some of the worst code I've worked with was so convoluted with needless abstractions that any development was a tricky process.

Most abstractions are not really abstractions at all, but nothing more than simple indirection. Indirection is sometimes required, though it is wrong to confuse it with abstraction. IFileWriter is not an abstraction. IReceipt, which happens to write to the file system when implemented as FileSystemReceipt, is an abstraction. IFileWriter could be an abstraction if the software we were writing worked directly with the file system, such as a text editor. In the case of printing receipts, where they are printed is simply an implementation detail.
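To make the distinction concrete, here is a minimal sketch in Java. The names beyond those in the post (InMemoryFileWriter, the issue method, the receipt path) are illustrative, not from the original:

```java
import java.util.ArrayList;
import java.util.List;

// Indirection: IFileWriter mirrors the file system API, so callers still think in files.
interface IFileWriter {
    void write(String path, String contents);
}

// Abstraction: IReceipt is expressed in domain terms; where it ends up is a detail.
interface IReceipt {
    void issue(String orderSummary);
}

// One implementation that happens to write receipts to the file system.
class FileSystemReceipt implements IReceipt {
    private final IFileWriter writer;

    FileSystemReceipt(IFileWriter writer) {
        this.writer = writer;
    }

    @Override
    public void issue(String orderSummary) {
        writer.write("receipts/latest.txt", orderSummary);
    }
}

// In-memory writer used here so the sketch runs without touching the disk.
class InMemoryFileWriter implements IFileWriter {
    final List<String> written = new ArrayList<>();

    @Override
    public void write(String path, String contents) {
        written.add(path + ": " + contents);
    }
}
```

Code that depends on IReceipt knows nothing about files; swapping FileSystemReceipt for a printer-backed implementation touches no callers.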

Finding the right level of abstraction can be tricky. From my experience there are a few techniques that can be used.


Embrace Coupling

Udi Dahan makes this point in his presentations. If you have a traditional application with a UI, domain and data layer, why bother adding further layers to abstract these? If we wish to retrieve a new field from the database and display its value, we have three places to change; adding further models and mapping layers does nothing but increase coupling. Applying namespaces correctly can also help here: if everything that needs to change at the same time is logically grouped, such changes are easier.


Do you truly need a database model mapped into a domain model, mapped into a view model and back? Applying YAGNI can limit many abstractions by simply not worrying about "what if" scenarios until they actually occur.


Command Query Responsibility Segregation, or CQRS, deserves an explanation of its own, but for now applying CQRS reduces unnecessary coupling by embracing it. For querying data and displaying it on a screen, my default choice is to use CQRS to simply read from the database and populate a view model. This limits abstractions and helps keep the code focused, flexible and open to change. I will expand on CQRS in a future post.
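As a rough illustration of the read side, the Java sketch below populates a view model straight from the read store, with no domain model or mapping layers in between. An in-memory map stands in for the database, and all the names are hypothetical:

```java
import java.util.Map;

// View model shaped for the screen, populated straight from the read store.
class PersonSummaryViewModel {
    final String fullName;

    PersonSummaryViewModel(String fullName) {
        this.fullName = fullName;
    }
}

// Read side: no domain model, no mapping layers, just rows into view models.
class PersonSummaryQuery {
    private final Map<Integer, String> readStore; // stand-in for a database table

    PersonSummaryQuery(Map<Integer, String> readStore) {
        this.readStore = readStore;
    }

    PersonSummaryViewModel byId(int id) {
        return new PersonSummaryViewModel(readStore.get(id));
    }
}
```

Adding a new field to the screen means touching the view model and the query, nothing else.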

Tuesday, 17 February 2015

Guard Clauses and Assertions

Two simple techniques that increase code quality, improve resilience and ease debugging are to use guard clauses effectively and to use assertions liberally.

Guard Clauses

  • Any public method should use guard clauses to ensure its preconditions are met.
  • They ensure the code's invariants are not broken.
  • Throw exceptions, because these are exceptional issues.
  • They assist both developers and users, as it is possible for these clauses to fail at runtime.
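A minimal sketch of such guard clauses in Java; the PersonalDetails listing here is a reconstruction, not the post's original code:

```java
// Hypothetical PersonalDetails class guarding its own invariants.
class PersonalDetails {
    private final String forename;
    private final String surname;

    PersonalDetails(String forename, String surname) {
        // Guard clauses: fail fast with exceptions when preconditions are broken.
        if (forename == null || forename.isEmpty()) {
            throw new IllegalArgumentException("A forename of at least one character must be provided.");
        }
        if (surname == null || surname.isEmpty()) {
            throw new IllegalArgumentException("A surname must be provided.");
        }
        this.forename = forename;
        this.surname = surname;
    }

    String forename() { return forename; }
    String surname() { return surname; }
}
```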

Here we enforce that any PersonalDetails instance has a forename and surname. A forename must also be at least one character long. As long as these conditions are met, we finally assign the values internally. Guard clauses should also be used on dependencies that are services, checking for example that a service is not a null instance.


Assertions

  • Used within private methods/functions where required.
  • Should be used for situations that should never happen, e.g. the presence of a bug or an invalid scenario.
  • Developer-only assistance; ideally the user should never see these, because automated and manual testing should have detected them.
  • Usually removed for release builds, though this is open to debate and best judged in context. Is it better for the program to crash and inform the user, or to carry on in an invalid state?
  • Great for documenting assumptions, e.g. that code a level above ensures an object is in a certain state.
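A sketch of an assertion guarding a private method, again in Java as a reconstruction. Note that Java's assert statements are disabled unless the JVM is started with -ea, which mirrors the point above about assertions being removed from release builds:

```java
class PersonalDetails {
    private final String forename;

    PersonalDetails(String forename) {
        // The constructor owns validation...
        if (forename == null || forename.isEmpty()) {
            throw new IllegalArgumentException("A forename must be provided.");
        }
        this.forename = formatForename(forename);
    }

    // ...so this private method only asserts the assumption it relies on.
    // If a future caller skips validation, the assert fails loudly in development.
    private String formatForename(String forename) {
        assert forename != null && !forename.isEmpty() : "Caller must validate forename";
        return forename.trim();
    }

    String forename() { return forename; }
}
```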

While this method is private, we have essentially stated that it takes no responsibility for validating that a name has been provided. That is the concern of another part of the code (the constructor in this case). However, this simple assert statement means that if the method is used in a different manner, it will fail spectacularly at runtime. This points at the incorrect use of the method and allows the developer to make the required changes.


Code quality improves because fewer invalid scenarios are allowed to happen. Because clauses and assertions are always present, they go hand in hand with automated tests, often catching scenarios that tests miss. Debugging is easier because the stack trace points you at the source of the problem, rather than at an initial problem hidden in layers of exceptions caused by invalid state. While applying clauses and assertions increases lines of code, they are easy to implement and the return on investment is high. There are no excuses not to use them.

Tuesday, 10 February 2015


Striving for consistency within a codebase is a good thing. I'm very much someone who believes in applying a consistent formatting style, patterns and practices. However, there are two sides to this view.

One colleague used to hate it when different applications used different frameworks, styles and conventions. This is a fair point: it made switching between them harder. In their eyes, a change to the development process should cascade across all applications.

Another colleague used to state that without breaking consistency, improvements and progress would never happen. An equally fair point. However, this led to scenarios where parts of the code were in differing states of consistency, or improvements were avoided because they were too large to implement safely.

Like most things in software development, there is rarely a true answer. The best of both worlds is to apply both concepts at varying levels.

Applying consistency at package/assembly/module/namespace level works well from my experience. Different boundaries can have different consistency rules.

This approach allows incremental evolution while still keeping consistency within a boundary. This gives you the benefits of favouring consistency while still allowing the code to evolve over time. Ratcheting can be used to ensure future work is aligned consistently. Rather than a big-bang implementation, larger, long-term changes can be performed steadily.
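Ratcheting can be automated with a simple check that fails whenever the number of violations of the old convention grows, while allowing it to shrink. A minimal Java sketch; the baseline figure and names are purely illustrative:

```java
// Hypothetical ratchet: run as part of the build to stop new violations creeping in.
class ConsistencyRatchet {
    // Baseline of known violations of the new convention; lower it as code is migrated.
    static final int BASELINE = 42;

    static void check(int currentViolations) {
        if (currentViolations > BASELINE) {
            throw new AssertionError(
                "New code must follow the new convention: "
                + currentViolations + " violations found, baseline is " + BASELINE);
        }
    }
}
```

When the count drops, the baseline is lowered to lock in the progress.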

Remember: software development is like gardening; it sometimes takes time to see the results, and applying a coding convention purely to conform to consistency requires thought, not blind adherence.

Tuesday, 3 February 2015

Value Object Refactoring

After extract method and extract class, introducing a value object is one of the most powerful refactorings available. A value object encapsulates a value or concept within your domain. While the term is best known from Domain Driven Design, DDD is not a prerequisite for its use. Introducing a value object can be applied to any code base.

Some excellent examples of value objects would include CustomerId, Money, OrderId and PhoneNumber. These could all be identified as integers, strings or decimal numbers, but doing so would lead to a series of downsides.

Making use of primitive data types to express concepts within an application is a code smell known as primitive obsession. Replacing primitives with value objects is the solution to this smell.

Primitive Obsession

  • Duplication is spread throughout the codebase, both in the form of simple guard clauses and in core domain logic.
  • More tests are required. This ties into the duplication above.
  • Your domain lends itself towards an anaemic model, full of utility classes that operate upon state.


The implementation of PersonalDetails would be straightforward to begin with.
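A first cut might look like the following Java sketch: a thin wrapper over two strings, with no behaviour yet (a reconstruction, not the original listing):

```java
// First cut of the value object: nothing more than a named wrapper over primitives.
final class PersonalDetails {
    private final String forename;
    private final String surname;

    PersonalDetails(String forename, String surname) {
        this.forename = forename;
        this.surname = surname;
    }

    String forename() { return forename; }
    String surname() { return surname; }
}
```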

Over time, areas of logic can slowly migrate towards the class. In most IDEs, wrapping a primitive type as the first step takes only a few keystrokes.

The constructor performs basic validation on a technical level. Once that is complete, we can carry out any domain logic. Likewise, the behaviour attached to this object (omitted for brevity) would include various domain-specific logic. For example, when changing surnames, any leading or trailing whitespace is removed.
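A sketch of that later stage in Java, with technical validation followed by domain logic such as whitespace trimming. The withSurname name is illustrative; value objects are immutable, so changing a surname yields a new instance:

```java
final class PersonalDetails {
    private final String forename;
    private final String surname;

    PersonalDetails(String forename, String surname) {
        // Technical validation first...
        if (forename == null || forename.isEmpty()) {
            throw new IllegalArgumentException("A forename must be provided.");
        }
        if (surname == null || surname.isEmpty()) {
            throw new IllegalArgumentException("A surname must be provided.");
        }
        // ...then domain logic, e.g. stripping stray whitespace.
        this.forename = forename.trim();
        this.surname = surname.trim();
    }

    // Immutable: "changing" the surname returns a fresh, validated instance.
    PersonalDetails withSurname(String newSurname) {
        return new PersonalDetails(forename, newSurname);
    }

    String forename() { return forename; }
    String surname() { return surname; }
}
```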

One recommendation would be to expose the underlying primitive. In this example, ToString has been overridden to return the string value being used. Ideally this should be a read-only operation, and it enables the object to play nicely with third parties. Typical use cases are serialization, or writing the value to a persistent store.

Equality (and hash code in this case) should also be implemented, because the nature of value objects allows them to be equal to other instances that share the same value, despite being different references in memory. The beauty of this is that value objects can be used as needed; there is no need for injection or other patterns.
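A hypothetical Surname value object sketched in Java shows value equality, a matching hash code, and a read-only toString (Java's equivalent of ToString) exposing the underlying primitive:

```java
import java.util.Objects;

final class Surname {
    private final String value;

    Surname(String value) {
        this.value = value;
    }

    // Two surnames with the same value are equal, whatever their references.
    @Override
    public boolean equals(Object other) {
        return other instanceof Surname && value.equals(((Surname) other).value);
    }

    @Override
    public int hashCode() {
        return Objects.hash(value);
    }

    // Read-only escape hatch for serialization or persistence.
    @Override
    public String toString() {
        return value;
    }
}
```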


  • Removes duplication. Only the object in question will be the source of truth.
  • Fewer tests need to be written. As the duplication has been removed, only one test per behaviour is required. Rather than duplicating checks for validation or formatting, these can be contained within the object. As the rest of the system deals with our value object, we don't have to worry about handling an invalid representation.
  • In statically typed languages you can lean on the compiler. It's impossible to supply anything other than PersonalDetails when we ask for an instance. Even for dynamic languages, the stack trace presented upon error would be far more useful than had a primitive type been provided.
  • The surface area for misconfiguring arguments is also smaller. Previously we would accept two strings that are order-dependent. Now this configuration is reduced to a few areas.
  • Using the example above, we can now rely on class pre-conditions to simplify our expectations when working with this type. Given any instance of PersonalDetails we can be sure that the forename and surname are never null or empty, and that each personal details instance will have a forename of at least one character long. A simple string can never guarantee such conditions.
  • Making value objects public generally makes sense. This provides an excellent seam for testing and integration.
  • The introduction of a value object plays nicely with my three basic steps to code quality.