Thursday, 30 October 2014

Practice, Practice, Practice

The final part of my "4 years as a Dev" series has the same conclusion as the last set of retrospective posts.

Continuous learning, practice and improvement are required.

  • Books
  • Blogs
  • Videos
  • Twitter
  • Conferences

All of these mediums help, but as I've said before, practice, practice, practice.

Do it right - violate YAGNI

You Ain't Gonna Need It (YAGNI) is about not writing code that isn't needed. I've come to realise how important this is when it comes to programming for change.

One of my biggest pet peeves from working on agile teams is YAGNI being used as an excuse.

YAGNI is no excuse for not doing a "proper job". The third step of the TDD cycle allows you to take the simplest thing that could possibly work and refactor it into something more dynamic, flexible or just plain better.

If you spend your time writing only the simplest thing possible, such as brain-dead procedural statements one after the next, the whole benefit of using TDD or writing automated tests is gone. You'd be more than capable of writing code like that without tests.

My discovery here was simple. Don't skip the refactor part of TDD. Don't allow someone to play the YAGNI card. Do it right.
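A hypothetical sketch of what the refactor step buys you (the function and region names here are my own, not from the post): start with the simplest thing that could possibly work, then refactor while the same tests stay green.

```python
# Simplest thing that could possibly work: brain-dead procedural
# statements one after the next. The tests pass, but stopping here
# skips the refactor step entirely.
def shipping_cost_v1(region):
    if region == "uk":
        return 5
    if region == "eu":
        return 10
    if region == "us":
        return 15
    raise ValueError(f"unknown region: {region}")


# After the refactor step: same behaviour, same tests, but data-driven
# and trivially extensible when the next region arrives.
SHIPPING_RATES = {"uk": 5, "eu": 10, "us": 15}


def shipping_cost(region):
    try:
        return SHIPPING_RATES[region]
    except KeyError:
        raise ValueError(f"unknown region: {region}") from None


# The same tests cover both versions, so the refactor is safe.
assert shipping_cost("uk") == shipping_cost_v1("uk") == 5
assert shipping_cost("eu") == shipping_cost_v1("eu") == 10
```

Because both versions sit behind the same tests, playing the YAGNI card to skip the second version saves nothing and costs flexibility.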

Monday, 13 October 2014

Characterization Tests

Having worked with some truly awful codebases, I find a common problem arises every now and then: you need to make a change within some legacy component that most likely has limited or no automated tests around it. This can be a scary process.

There are a few techniques you can use to limit the fear of breaking legacy code, such as sprout methods or classes, however these aren't optimal in every scenario.

Another option is characterization tests - asking "what is this bit of code actually doing?".

  1. Start with a simple test such as "ItWorks".
  2. Run the test - watch it fail.
  3. Using the stacktrace or error reported, write some additional setup.
  4. Run the test - watch it get past the previous error.
  5. Rinse and repeat steps 3-4 until green.

As part of the first step you should keep the initial test as simple as possible. For example, if an input to the system under test (SUT) takes a Foo object, just instantiate Foo. Don't start setting values or fields on it. Let the failing test indicate what needs to be set, such as a BarException informing you that "bar must be greater than zero" as part of step three.
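Sketching the Foo/BarException example from the paragraph above (the class shapes and the SUT's behaviour are hypothetical, purely for illustration):

```python
class BarException(Exception):
    pass


class Foo:
    # Hypothetical input type for the SUT; starts with nothing set.
    def __init__(self, bar=0):
        self.bar = bar


def system_under_test(foo):
    # Hypothetical legacy code whose failures drive our setup.
    if foo.bar <= 0:
        raise BarException("bar must be greater than zero")
    return foo.bar * 2


# First attempt: just instantiate Foo, set nothing, watch it fail.
try:
    system_under_test(Foo())
except BarException as e:
    # The failure tells us exactly what setup step three needs.
    assert "bar must be greater than zero" in str(e)

# Step three: add only the setup the error asked for, and re-run.
assert system_under_test(Foo(bar=5)) == 10
```

Each failure message pulls exactly one more piece of setup out of you, so the test ends up documenting what the SUT actually requires.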

By now you should have exercised a good chunk of the system under test. However, you may need to add additional tests. For example, if the code contains an "if" statement, you need at least two characterization tests. A good way to detect how many tests you need is a code coverage tool, or manually inserting assertions into the SUT to expose any missing coverage. Likewise, a good manual review is required to catch any other tests you may have missed, such as boundary cases.

Now the fun can begin. You can refactor like crazy.

Afterwards you should have a nicely refactored component that you can easily extend or modify to add your new feature. You also have a solid suite of tests to prove you've not broken anything. These tests will also document the current behaviour of the system - bugs included.

Examples
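A minimal sketch of the loop above, against a made-up legacy function (the discount rules are an assumption of mine, not from the original post):

```python
# A made-up legacy function we need to change but don't yet understand.
def legacy_discount(order):
    if order["total"] <= 0:
        raise ValueError("total must be greater than zero")
    if order["total"] > 100:
        return order["total"] * 0.9
    return order["total"]


# Steps 1-2: start with a simple "ItWorks" test and watch it fail;
# the error messages (step 3) tell us what setup to add. Steps 4-5:
# rinse and repeat until green, then pin down each branch.
def test_small_order_is_not_discounted():
    assert legacy_discount({"total": 50}) == 50


def test_large_order_gets_ten_percent_off():
    # The "if" statement means we need at least two characterization
    # tests - one per branch - before refactoring like crazy.
    assert legacy_discount({"total": 200}) == 180.0


test_small_order_is_not_discounted()
test_large_order_gets_ten_percent_off()
```

Note the tests assert whatever the code currently does, bugs included; they are a safety net for refactoring, not a specification.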

Saturday, 4 October 2014

Reinvent the Wheel, Often

We are often told never to reinvent the wheel. In other words, if your job is to solve problems within Domain X, you shouldn't spend your time recreating or solving problems that fall outside of this domain.

For production code, I agree with this statement fully. Software development is hard enough. The last thing we want is to waste resources such as time or money on anything we can get away with not implementing. For example, creating your own web framework is a project in itself. All you'll end up with is a slow, buggy, badly implemented version of a web framework that happens to power your domain. Sadly, I have been on the receiving end of such decisions.

There are two times however, when reinventing the wheel is a good thing.

  • You can't get the product off the shelf
  • Learning or personal benefit

The chances that there is no web framework, database client, caching layer or the like that you can use are very slim. Some systems become so bespoke, or scale to such volumes, that recreating such components makes sense. These are the Netflixes, Facebooks and Googles of the world. Most enterprise software will never reach a sliver of this sort of scale.

The biggest benefit of recreating well known, solved solutions is the vast amount of learning and knowledge you will obtain. In the past I have re-invented numerous wheels, but each time taken away something of value.

Systems that seem simple at first, such as a static website generator, turn out to be incredibly complex once you understand the full set of scenarios and edge cases you must handle. The key point is that these wheels never make it into production, for the reasons detailed previously.

In turn, you will come to appreciate library and framework developers if you can resist Not Invented Here syndrome. Their full-time project is the delivery of that solution. They have the time to solve all the edge cases you don't. Not to mention the vast number of other users who will have debugged and improved the solution going forwards. By not reinventing wheels you get as much time as possible to focus on delivering your solution to the domain problem in question, which, after all, is your job.

Tuesday, 23 September 2014

DDD Validation

Validation within an application (specifically in terms of Domain Driven Design - DDD) can be solved in a variety of ways.

  • A validate method on the entity/value type in question
  • An IsValid property/accessor on the entity/value type in question
  • A separate service could be used

Validate Method

Adding a validate method would work, but the flaw with this approach is that you lack any context about what is happening to the object in question.

Validate Flag

Some sort of flag on the object that denotes whether or not it is in a valid state is undesirable. Firstly, it forces the developer to check the flag at the correct time. And if the object is invalid, exactly what do you do at that point? This approach is often combined with a validate method that returns the exact error messages.
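A hypothetical sketch of why the flag approach is awkward (the Person type and its rules are my own invention, for illustration only): the caller must remember to check the flag, and when it is false the flag alone explains nothing.

```python
class Person:
    """Flag-style validation: the object carries an is_valid property."""

    def __init__(self, name, age):
        self.name = name
        self.age = age

    @property
    def is_valid(self):
        # No context: valid for what? Saving? Loading? Display?
        return bool(self.name) and self.age >= 0


person = Person("", -1)

# Nothing forces a developer to check this at the correct time - and
# when it is False, it says nothing about which rule was broken or why.
assert person.is_valid is False
assert Person("Alice", 30).is_valid is True
```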

Validator Services

A separate service seems less than ideal at first when you are aiming for a richer domain model, but this solution has numerous benefits. Firstly, unlike the two solutions above, you always have the context in which validation is being performed. For example, when saving a customer you will most likely want to perform different validation from what you would when loading up an aggregate.

An additional point to consider is that most validation is not business logic. In other words, checking for null references is not a business concern. Therefore separating this from your domain objects makes a lot of sense. The only logic the domain objects should contain is business logic.

As each service is a separate object, you gain the benefits of the single responsibility principle (SRP), meaning testing, development and future changes are easier.

Example

The beauty here is that each validator (a simple function in this case) can be used in the correct context. E.g. when the PersonController POST handler is invoked, we use the person saving validator.
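A sketch of the validator-as-a-function idea (the Person type, the specific rules and the controller hook are assumptions on my part, not taken from the post):

```python
from dataclasses import dataclass


@dataclass
class Person:
    # Domain object: business logic only, no validation plumbing.
    name: str
    age: int


def validate_person_for_save(person):
    """Context-specific validator used by the save path,
    e.g. invoked from a PersonController POST handler."""
    errors = []
    if not person.name:
        errors.append("name is required")
    if person.age < 0:
        errors.append("age must not be negative")
    return errors


def validate_person_for_load(person):
    """A different context gets a different, looser set of rules."""
    return [] if person.name is not None else ["name missing from store"]


# The POST handler picks the saving validator for its context; plumbing
# concerns like these checks stay out of the domain object itself.
assert validate_person_for_save(Person("Alice", 30)) == []
assert validate_person_for_save(Person("", -1)) == [
    "name is required",
    "age must not be negative",
]
```

Because each validator is a plain function, adding a new context (importing, archiving, display) is just another function and another small, focused set of tests.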