Tuesday, 29 September 2015

The Self Shunt - Test Doubles without a Framework

By default you should favour hand-crafted test doubles over a framework. Before you reach for a framework there is another bridging step you can take, only pulling in a framework if complexity arises - the Self Shunt.

Assume a simple Hello World subject under test where we can provide different formatters that format the message for the console, or as XML or JSON, for example. How do we test that the formatter is used, and with the right arguments?

Enter the Self Shunt (pdf). Have the test fixture implement the interface, in other words assume the role of a message formatter. The fixture provides itself as a parameter to the greeter in the form of self/this. The greeter uses this implementation during its execution; the test fixture can then assert on or inspect the state it recorded.
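As a sketch of the idea in Python - the `Greeter` and `format_message` names are my own, since the original code is not shown:

```python
import unittest


class Greeter:
    """Subject under test: delegates formatting to a collaborator."""

    def __init__(self, formatter):
        self._formatter = formatter

    def greet(self, name):
        return self._formatter.format_message(f"Hello, {name}!")


class GreeterTest(unittest.TestCase):
    """The fixture itself plays the role of the formatter - the Self Shunt."""

    def setUp(self):
        self.formatted_with = None

    def format_message(self, message):
        # Record the call so the test can assert on it afterwards.
        self.formatted_with = message
        return message

    def test_greeter_uses_formatter(self):
        greeter = Greeter(formatter=self)  # pass the fixture as the collaborator
        greeter.greet("World")
        self.assertEqual(self.formatted_with, "Hello, World!")
```

No framework, no separate double: the fixture records the interaction and asserts on it directly.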


  • Quick and simple to get up and running.
  • Most commands fall into the category of invoke something with some parameters, with little more complexity.
  • Forces you to respect the Interface Segregation Principle, otherwise this technique can become painful. A framework usually masks this complexity.
  • Code is inline to the test or fixtures.
  • Exposes and explains how frameworks work conceptually to new developers - removing some of the magic.

The Self Shunt is my default approach for testing commands, which are usually local to test fixtures. Queries default to hand-crafted stubs, which are usually shared amongst tests. If further tests need the same configuration, the shunt can be promoted to a full object that lives independently of the test fixture. Finally, if this starts to become difficult to work with, I would reach for a framework - commands usually reach this point first.

Tuesday, 22 September 2015

Waste: Write Less Code

One of the biggest forms of waste is code. An estimated 80% of features in a software project are never or rarely used. This makes code the software development equivalent of inventory. Having a warehouse full of inventory is not a benefit, neither is having a repository full of code.

How to Have Less Code?

Delete it!

As much as you can - within reason, of course: tests must pass and features must still work. Deleting feels great when you can successfully remove some legacy code. You'll be surprised at what can be removed. Commented-out code and unused classes and methods are the obvious first candidates.

Say No To Features by Default

Only add a feature if its benefit outweighs the combined costs of planning, designing, developing, testing and maintaining it. Even then, do you really need it? The advice here is: do not listen to your customers regarding which features to add; instead listen to their problems.


Use Third Party Code

See if a library or framework can handle your use case. It may not be a perfect fit, but if isolated correctly, third party code can massively reduce the amount of code you need to write. You still need to configure and maintain that third party code, however.

Benefits of Less Code

  • Quicker to compile/parse.
  • Tests run quicker.
  • Easier on-boarding - less to understand and become familiar with.
  • Less chance of bugs - more code is more likely to have bugs.
  • Potential performance related problems should be reduced.

Remember - code is a liability. The job of a software developer is not to write code, it is to solve problems. Sometimes this takes thousands of lines of code; other times it can take a simple conversation.

Tuesday, 15 September 2015

Types of Test Doubles

Mock is an overloaded term in software development. Sadly this leads to developers answering "mock it" when a mock object may not be the right solution. Test Double is the more general term, and one I should use more than I do at present. Choosing the wrong test double may seem innocent, but the effect is a very different style of test method, with increased coupling to implementation details. The following definitions are ordered by complexity and by increasing coupling.


Stubs

Stubs provide canned responses. By their nature stubs respond to queries. Stubs allow you to test paths through the code that would otherwise be difficult to reach, as they always provide the same answer.
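A hand-crafted stub is little more than a class with a hard-coded answer. A sketch, using a made-up exchange-rate query:

```python
class StubExchangeRates:
    """Stub: always returns the same canned answer to a query."""

    def rate_for(self, currency):
        return 1.5  # canned response, regardless of input


def price_in(currency, amount, rates):
    """Subject under test: converts an amount using the rates collaborator."""
    return amount * rates.rate_for(currency)
```

The canned answer makes an otherwise awkward path - live exchange rates - trivially testable.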


Spies

A spy is similar to a stub, with the addition that it records its actions. When responding to a query or a command the spy keeps track of what happened, how often, and anything else relevant. The test can then inspect the spy for the answer, deciding whether to pass or fail. Unlike mocks, spies play well with the Arrange-Act-Assert pattern. Spies let you answer the question "has something happened?" whereas mocks tend to lead you towards "how has something happened?".
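A hand-crafted spy only needs to record its calls; a sketch with a made-up notifier collaborator:

```python
class SpyNotifier:
    """Spy: silently records every call for the test to inspect later."""

    def __init__(self):
        self.notifications = []

    def notify(self, recipient, message):
        self.notifications.append((recipient, message))


def alert_operators(notifier, error):
    """Subject under test: raises an alert via the notifier collaborator."""
    notifier.notify("operators", f"error: {error}")
```

The spy stays silent during the act phase; the assert phase interrogates it, keeping the test in Arrange-Act-Assert shape.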


Fakes

Fake objects tend to be used in higher level tests. These are working implementations of the object they stand in for, built in a simple manner - a fake repository, for example, might use an in-memory hash table instead of a real database. This allows tests to run with some confidence that the system will behave as expected. Combined with Contract Tests, fakes can turbo charge the speed of your test execution while still providing confidence.
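A fake is a genuinely working, simplified implementation. A sketch of the in-memory repository idea (the repository interface here is assumed):

```python
class FakeUserRepository:
    """Fake: a real, working implementation backed by an in-memory dict
    instead of a database."""

    def __init__(self):
        self._users = {}

    def save(self, user_id, user):
        self._users[user_id] = user

    def find(self, user_id):
        return self._users.get(user_id)
```

Because the fake honours the same contract as the real repository, higher level tests can run against it at in-memory speed.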


Mocks

Like spies, mocks are primarily concerned with recording what happens. However, while spies are silent by nature, relying on the test to interrogate them, mocks throw exceptions if their expectations are not met. The mock's natural partner is the command. Unlike spies, mocks can struggle to fit into the Arrange-Act-Assert pattern. Of all the test doubles, mocks are the most coupled to implementation details, so their use should be limited.
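A hand-rolled mock carries its own expectations and fails loudly when they are not met. A minimal sketch with a made-up mailer command:

```python
class MockMailer:
    """Mock: knows what it expects and raises if the expectation is broken."""

    def __init__(self, expected_recipient):
        self._expected_recipient = expected_recipient
        self._sent = False

    def send(self, recipient):
        if recipient != self._expected_recipient:
            raise AssertionError(
                f"expected send to {self._expected_recipient}, got {recipient}")
        self._sent = True

    def verify(self):
        if not self._sent:
            raise AssertionError("expected send() to have been called")
```

The expectation is set up front and `verify()` is called at the end - which is exactly why mocks bend the Arrange-Act-Assert shape and couple the test to how the subject does its work.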

Tuesday, 8 September 2015

Release It - Highlights Part 2

This is the second part of my collection of notes and snippets from Release It!


Caching

  • Low memory conditions are a threat to both stability and capacity.
  • You need to ask whether the possible keys are infinite or finite and would the items ever need to change?
  • The simplest cache clearing mechanism is time based.
  • Improper use of caching is the major cause of memory leaks, which turn into horrors like daily server restarts.
  • Your system should run for at least the typical deployment cycle. If you deploy once every two weeks, it should be able to run for at least two weeks without restart.
  • Limit the amount of memory a cache can use.
  • Caches that do not limit memory will eventually eat all memory.


Test Harnesses

  • Every integration point should have a test harness that can be substituted.
  • Make your test harness act like a hacker - try large payloads, invalid characters, injection and so on.
  • Have your test harness log requests so you can see what has caused problems.
  • Run longevity tests - tests that put impulse and stress upon a system over long periods of time.
  • When someone says "the odds of that happening are millions to one", it is actually quite likely to happen. Given an average site serving thousands of requests a day, those odds are an easy target to hit.
  • Sessions are the Achilles heel of every application server.
  • Most testing uses the app in the way it was expected to be used, such as load testing a site using the correct workflow. What about load testing without cookies? Would this spawn a new session on each request?


HTML

  • Whitespace costs! In HTML (or whatever markup you generate) remove all unnecessary whitespace. It costs time to generate and money to send across the wire. You could argue this matters only for high traffic sites, but the technique is very simple to apply as part of the build and speeds up client side rendering.
  • Omit needless characters in HTML such as comments. Use server side commenting instead, this will be removed when processed.


Precomputing Content

  • Precompute as much of the page as possible. Use "punch outs" for dynamic content. For example, Slashdot generates its page once and serves it to thousands of users, so all users get the page equally fast. Caching instead would mean a handful of users get a slow experience on every cache miss.
  • Precomputed content should be deployed as part of the build. For more frequent updates another strategy or "content deploys" would be required.


Logging

  • The human visual system is an excellent pattern matching machine. Make logs readable by using a custom format; scanning them then becomes very easy.
  • Multi-line log entries are difficult to work with and harder to grep. Keep each entry on one line.
  • Each week, review the system's tickets. Try to identify and fix problems as you go, and where possible use this information to predict future problems.
  • Check the logs daily for suspicious stack traces. These could be common errors pointing at bugs that need fixing.

Tuesday, 1 September 2015

Release It - Highlights Part 1

Release It! is one of the most useful books I've read. The advice and suggestions inside certainly change your perspective on how to write software. My key takeaway is that software should be cynical. Expect the worst, expect failures and put up boundaries. In the majority of cases these failures will be triggered by integration points with other systems, be they third parties or your own.

My rough notes and snippets will be spread across the following two posts. There is much more to the book than this, including various examples of real life systems failing and how they should have handled the problem in the first place.

Shared Resources

  • Shared Resources can jeopardize scalability.
  • When a shared resource gets overloaded, it will become a bottleneck.
  • If you provide the front end system, test what happens if the back end is slow/down. If you provide the back end, test what happens if the front end is under heavy load.


Slow Responses

  • Generating a slow response is worse than refusing to connect or timing out.
  • Slow responses trigger cascading failures.
  • Slow responses on the front end trigger more requests - such as the user hitting refresh a few times - ironically generating even more load.
  • You should return an error when a response exceeds the system's allowed time, rather than continuing to wait.
  • Most default timeouts of libraries and frameworks are far too generous - always configure manually.
  • One of the worst places that scaling effects will bite you is with point to point communication. Favour other alternatives such as messaging to remove this problem.
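As an illustration of configuring a timeout explicitly rather than trusting the library default, a Python sketch (the URL and limit are arbitrary):

```python
import socket
import urllib.request


def fetch(url, timeout_seconds=2.0):
    """Fetch a URL, but never wait longer than the system's allowed time.

    Always pass a timeout explicitly - many library defaults are far too
    generous, or unbounded altogether.
    """
    try:
        with urllib.request.urlopen(url, timeout=timeout_seconds) as response:
            return response.read()
    except (socket.timeout, OSError):
        # Fail fast instead of producing a slow response of our own.
        return None
```

Failing fast here is the point: the caller gets a quick, definite answer instead of a blocked thread.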


SLAs

  • When calling third parties, service levels only decrease.
  • Make sure even without a third party response your system can degrade gracefully.
  • Be careful when crafting SLAs. Do not simply state 99.999%; it costs too much to hit this target and most systems don't need that sort of uptime.
  • Reorient discussions around SLAs to focus on features, not systems.
  • You cannot offer a better SLA than the worst of any external dependencies you use.


Databases

  • Your application probably trusts the database far too much.
  • Design with scepticism and you will achieve resilience.
  • What happens if the DB returns 5 million rows instead of 5 hundred? You could run out of memory trying to load them all. The only answers a query can return are 0, 1 or many - don't rely on the database to enforce a limit. Other systems or batch processes may not respect this rule and could insert too much data.
  • After a system is in production, queries can return huge result sets, unlike developer testing where only a small subset of data is present.
  • Limit your DB queries, e.g. SELECT * FROM table LIMIT 15 (substituting the wildcard and criteria as appropriate).
  • Put limits into other application protocols, such as REST endpoints, via paging or offsets.
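A sketch of paging with a hard limit, using Python's sqlite3 module here (the table and column names are made up):

```python
import sqlite3

PAGE_SIZE = 15  # hard upper bound - never return an unbounded "many"


def fetch_page(connection, page):
    """Return at most PAGE_SIZE rows, however large the table has grown."""
    cursor = connection.execute(
        "SELECT id, name FROM users ORDER BY id LIMIT ? OFFSET ?",
        (PAGE_SIZE, page * PAGE_SIZE),
    )
    return cursor.fetchall()
```

The limit lives in the application, so even a table that has grown to millions of rows in production cannot exhaust memory.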

Circuit Breakers

  • Now and forever networks will always be unreliable.
  • The timeout pattern prevents calls to integration points from becoming blocked threads.
  • Circuit Breakers are a way of automatically degrading functionality when a system is under stress.
  • Changes in a circuit breaker should always be logged and monitored.
  • The frequency of state changes in a circuit breaker can help diagnose other problems with the system.
  • When there is a problem with an integration point, stop calling it during a cool off period. The circuit breaker will enable this.
  • Popping a circuit breaker always indicates a serious problem - log it.
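The notes above can be sketched as a minimal circuit breaker in Python - the names, thresholds and injected clock are illustrative, not from the book:

```python
import time


class CircuitBreaker:
    """Minimal sketch: pop open after N failures, retry after a cool-off."""

    def __init__(self, failure_threshold=3, cool_off_seconds=30.0,
                 clock=time.monotonic):
        self._failure_threshold = failure_threshold
        self._cool_off_seconds = cool_off_seconds
        self._clock = clock  # injectable for testing
        self._failures = 0
        self._opened_at = None

    def call(self, operation):
        if self._opened_at is not None:
            if self._clock() - self._opened_at < self._cool_off_seconds:
                # Still cooling off: refuse fast, don't hit the sick system.
                raise RuntimeError("circuit open: refusing call during cool-off")
            self._opened_at = None  # half-open: allow one trial call

        try:
            result = operation()
        except Exception:
            self._failures += 1
            if self._failures >= self._failure_threshold:
                self._opened_at = self._clock()  # breaker pops - log/alert here
            raise
        self._failures = 0  # success closes the circuit again
        return result
```

The state changes - popping open and closing again - are exactly where the logging and monitoring hooks belong.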