Value Object Refactoring

After extract method and extract class, introducing a value object is one of the most powerful refactorings available. A value object encapsulates a value or concept within your domain. While the term is best known from Domain Driven Design, DDD is not a prerequisite for its use; introducing a value object can be applied to any code base.

Excellent examples of value objects include CustomerId, Money, OrderId and PhoneNumber. These could all be represented as integers, strings or decimal numbers, but doing so leads to a series of downsides.

Making use of primitive data types to express concepts within an application is a code smell known as primitive obsession. Replacing primitives with value objects is the solution to this smell.

Primitive Obsession

  • Duplication is scattered throughout the codebase, both in the form of simple guard clauses and in core domain logic.
  • More tests are required. This ties into the duplication above.
  • Your domain lends itself towards an anaemic model, full of utility classes that operate upon state.

Solution

The implementation of PersonalDetails would be straightforward to begin with.
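
A minimal first cut might look something like the sketch below. The original listing is not reproduced here, so the forename and surname fields and member names are assumptions based on the surrounding description.

public class PersonalDetails
{
    // To begin with, the class simply wraps the two primitive strings.
    public PersonalDetails(string forename, string surname)
    {
        Forename = forename;
        Surname = surname;
    }

    public string Forename { get; }
    public string Surname { get; }
}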

Over time, areas of logic can slowly migrate towards the class. In most IDEs, wrapping a primitive type as the first step can be carried out in a few keystrokes.

The constructor performs basic validation on a technical level. Once that is complete, we can carry out any domain logic. Likewise, the behaviour attached to this object (hidden for brevity) would include various domain-specific logic. For example, when changing surnames any leading or trailing whitespace is removed.
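
As a rough sketch of how that validation and behaviour might look (the guard conditions and the ChangeSurname method are illustrative assumptions, not the original code):

using System;

public class PersonalDetails
{
    private readonly string forename;
    private readonly string surname;

    public PersonalDetails(string forename, string surname)
    {
        // Technical validation up front; the rest of the system can then
        // assume any PersonalDetails instance is well formed.
        if (string.IsNullOrWhiteSpace(forename))
            throw new ArgumentException("Forename must not be null or empty.", nameof(forename));
        if (string.IsNullOrWhiteSpace(surname))
            throw new ArgumentException("Surname must not be null or empty.", nameof(surname));

        this.forename = forename;
        this.surname = surname;
    }

    // Domain-specific behaviour lives on the object, e.g. changing surname
    // removes any leading or trailing whitespace.
    public PersonalDetails ChangeSurname(string newSurname)
    {
        return new PersonalDetails(forename, newSurname?.Trim());
    }
}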

One recommendation would be to expose the underlying primitive. In this example ToString has been overridden to return the string value that is being used. Ideally this should be a read-only operation, and it enables the object to play nicely with third parties. Use cases for this include serialization, or writing the value to a persistent store.
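
For instance, assuming the forename and surname fields from the sketch above, the override might look like this (the exact format returned is an assumption):

// Inside PersonalDetails: a read-only view of the underlying value,
// handy for serialization or writing to a persistent store.
public override string ToString()
{
    return forename + " " + surname;
}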

Equality (and hashcode in this case) should also be implemented, because the nature of value objects allows them to be equal to other instances that share the same value despite being different references in memory. The beauty of this is that value objects can be created and used as needed; there is no need for injection or other patterns.
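
A sketch of value-based equality, again assuming the two string fields (the hashing strategy is illustrative):

// Inside PersonalDetails: two instances are equal when their values match,
// regardless of being different references in memory.
public override bool Equals(object obj)
{
    var other = obj as PersonalDetails;
    if (other == null)
        return false;

    return forename == other.forename && surname == other.surname;
}

public override int GetHashCode()
{
    unchecked
    {
        // Combine the hash codes of the underlying values.
        return (forename.GetHashCode() * 397) ^ surname.GetHashCode();
    }
}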

Benefits

  • Removes duplication. The object in question becomes the single source of truth.
  • Fewer tests need to be written. As the duplication has been removed, only one test per behaviour is required. Rather than duplicating checks for validation or formatting, these can be contained within the object. As the rest of the system deals with our value object, we don't have to worry about handling an invalid representation.
  • In statically typed languages you can lean on the compiler. It's impossible to supply anything other than PersonalDetails when we ask for an instance. Even in dynamic languages, the stack trace presented upon error would be far more useful than if a primitive type had been provided.
  • The surface area for misconfiguring arguments is also smaller. Previously we would accept two strings that are order-dependent; now that construction is confined to a few places.
  • Using the example above, we can now rely on class pre-conditions to simplify our expectations when working with this type. Given any instance of PersonalDetails we can be sure that the forename and surname are never null or empty, and that each PersonalDetails instance will have a forename at least one character long. A simple string can never guarantee such conditions.
  • Making value objects public generally makes sense. This provides an excellent seam for testing and integration.
  • The introduction of a value object plays nicely with my three basic steps to code quality.
