Tuesday, 18 November 2014

Ratcheting

Some tasks in software development are mundane, such as formatting and code conventions. Where possible, tooling should take away this pain; however, sometimes you need a developer to take on a task that requires a great deal of time and/or effort to complete. Tooling will only get you so far.

An example of this would be declaring that all projects must build and compile with zero warnings. I've tried this in the past after a team retrospective. We had hundreds of warnings per project, spread across about fifteen projects at the time. Spending several weeks of development time resolving these would not have been fun, nor financially viable. However, we really wanted to implement this change.

Solution

  • I wrote a single test, executed as part of the build process, which asserted the warning count per project (a sketch follows below).
  • Every now and then, whenever I had some slack time (ten minutes before a meeting, thirty minutes at the end of the day, etc.) I would open up a project and fix some warnings, then run the test and lower the count it asserted against to match.
  • Rinse and repeat this process, and after a while a project would assert that it had zero warnings.
  • From then on it was impossible for a developer to commit a change that raised a new warning.
  • In the meantime, the limit ensured that no new warnings were added, which would have increased the workload.

After a month or so all the projects reported zero warnings. Going forward, the test was modified so that new projects added to source control would have the same checks run against them, meaning no new project can have a warning count greater than zero.
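
For illustration, a minimal sketch of what such a ratchet test could look like in C# with NUnit. The project names, log location and warning-counting logic are all hypothetical stand-ins; in practice you would count warnings however your build exposes them.

```csharp
using System.Collections.Generic;
using System.IO;
using System.Linq;
using NUnit.Framework;

[TestFixture]
public class WarningRatchetTests
{
    // Current upper limits - lowered by hand as warnings are fixed,
    // and never allowed to rise.
    private static readonly Dictionary<string, int> MaxWarnings =
        new Dictionary<string, int>
        {
            { "Billing",   120 },
            { "Reporting",  45 },
            { "Web",         0 }, // done - locked at zero
        };

    [Test]
    public void NoProjectExceedsItsWarningLimit()
    {
        foreach (var project in MaxWarnings)
        {
            var actual = CountWarnings(project.Key);

            Assert.That(actual, Is.LessThanOrEqualTo(project.Value),
                "Project '" + project.Key + "' has gained new warnings.");
        }
    }

    private static int CountWarnings(string projectName)
    {
        // Hypothetical: parse the compiler output for the project.
        var logPath = Path.Combine("BuildLogs", projectName + ".log");
        return File.ReadLines(logPath).Count(l => l.Contains(": warning "));
    }
}
```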

It turns out this has been documented before - it's called Ratcheting. While I didn't know the name at the time, it's nice to have one to use when describing this technique.

Thursday, 6 November 2014

Dependency Injection (DI) Containers

Strengths

One place for configuration
Rather than configuration being scattered throughout the system. Most DI containers have some sort of "module" system that lets you group associated components together.
Scoping
Different component lifestyles can be achieved: per request, per thread, singleton and others. Other frameworks usually have the ability to plug into these containers, meaning such features integrate nicely.
Feature rich
Along with the basic DI components usually comes a large number of additional features, which may or may not be needed.

Weaknesses

Heavyweight
Usually in the form of frameworks or libraries. DI is a simple concept, but such containers can make getting to grips with it tremendously difficult.
Config
Configuration can be difficult. Rather than just applying DI, you need to learn the tooling. XML configuration has widely fallen out of favour, but even code-based configuration can be costly to set up.
Runtime errors
Errors that would have been caught at compile time (in a static language) now become runtime errors. Circular references are easily introduced if you are not careful. Made a mistake during configuration? The system will be out of action. If you're lucky the stacktrace will point you in the right direction, but usually these are vague and/or confusing.
Magic
With the container in charge you lose control of what should be an easy part of your development process. The more convention based configuration you apply, the more chance things can go wrong. Simple changes such as multiple implementations of an interface can prove difficult to configure without breaking previous conventions. Much of the time adding a new class to the system feels risky - you won't know until runtime if you've got it working.

Alternatives

KISS
Keep your dependency wiring at your application root - most likely main. This is my preferred, default approach to begin with.
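
A minimal sketch of this approach, with illustrative types - all the wiring lives in Main, in plain code:

```csharp
using System;

public interface IGreeter { void Greet(); }

public class ConsoleGreeter : IGreeter
{
    public void Greet() { Console.WriteLine("Hello"); }
}

public class App
{
    private readonly IGreeter greeter;

    public App(IGreeter greeter) { this.greeter = greeter; }

    public void Run() { greeter.Greet(); }
}

public static class Program
{
    public static void Main()
    {
        // The application root is the one place dependencies are wired up.
        var app = new App(new ConsoleGreeter());
        app.Run();
    }
}
```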
KISS - Modules
If this configuration starts to get out of hand - use modules. Need to modify how the kitchen is built? Just open up KitchenModule.cs. With direct access to the references of these dependencies you can control scoping. For example you can re-use the same kitchen instance between house instances.
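
A sketch of such a hand-rolled module, using the kitchen example (the types here are illustrative):

```csharp
public class Oven { }
public class Sink { }

public class Kitchen
{
    public Kitchen(Oven oven, Sink sink) { }
}

public class House
{
    public House(Kitchen kitchen) { }
}

// All kitchen wiring lives here - change how the kitchen is built
// by opening this one file.
public class KitchenModule
{
    private readonly Kitchen kitchen = new Kitchen(new Oven(), new Sink());

    // Expose a single shared instance - scoping controlled by plain code.
    public Kitchen Kitchen { get { return kitchen; } }
}

public static class Program
{
    public static void Main()
    {
        var module = new KitchenModule();

        // The same kitchen instance is re-used between house instances.
        var house1 = new House(module.Kitchen);
        var house2 = new House(module.Kitchen);
    }
}
```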
Refactor
As always, you can refactor towards a DI container if you feel the need to use one.

Thursday, 30 October 2014

Practice, Practice, Practice

The final part of my "4 years as a Dev" series has the same conclusion as the last set of retrospective posts.

Continuous learning, practice and improvement is required.

  • Books
  • Blogs
  • Videos
  • Twitter
  • Conferences

All of these mediums help, but as I've said before, practice, practice, practice.

Do it right - violate YAGNI

You Ain't Gonna Need It, or YAGNI, is about not writing code that is not yet needed. I've gone on to realise how important this is when it comes to programming for change.

One of my biggest pet peeves from working on agile teams is YAGNI being used as an excuse.

YAGNI is no excuse for not doing a "proper job". The third step of the TDD cycle allows you to take the simplest thing that could possibly work and refactor it into something more dynamic, flexible or just plain better.

If you only ever write the simplest thing possible, such as brain-dead procedural statements one after the next, the whole benefit of using TDD or writing automated tests is gone - you'd be more than capable of producing that without them.
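
A contrived, hypothetical illustration of what the refactor step buys you:

```csharp
using System;

public static class Calendar
{
    // Green: the simplest thing that could possibly work -
    // brain-dead branches, one per test case.
    public static bool IsLeapYearNaive(int year)
    {
        if (year == 2000) return true;
        if (year == 1900) return false;
        if (year == 2012) return true;
        if (year == 2013) return false;
        throw new ArgumentOutOfRangeException("year");
    }

    // Refactor: the third step of the cycle generalises it into the
    // real rule, with the same tests proving behaviour still holds.
    public static bool IsLeapYear(int year)
    {
        return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
    }
}
```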

My discovery here was simple. Don't skip the refactor step of TDD. Don't allow someone to play the YAGNI card. Do it right.

Monday, 13 October 2014

Characterization Tests

Having worked with some truly awful codebases, a common problem tends to arise every now and then: you need to make a change within some legacy component that most likely has limited or no automated tests around it. This can be a scary process.

There are a few techniques you can use to limit the fear of breaking legacy code, such as sprout methods or sprout classes; however, these aren't optimal in all scenarios.

Another option is characterization tests or "what is this bit of code actually doing?".

  1. Start with a simple test such as "ItWorks".
  2. Run the test - watch it fail.
  3. Using the stacktrace or error reported, write some additional setup.
  4. Run the test - watch it get past the previous error.
  5. Rinse and repeat steps 3-4 until green.

As part of the first step you should keep the initial test as simple as possible. For example, if an input to the system under test (SUT) takes a Foo object, just instantiate Foo. Don't start setting values or fields on Foo. Let the failing test indicate what needs to be set - such as a BarException informing you that "bar must be greater than zero" - as part of step three.
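
A sketch of how that first test might end up, with hypothetical names throughout (Foo, Bar, LegacyPricer); the "legacy" class here is a toy stand-in for code that in reality you would not be able to read at a glance:

```csharp
using System;
using NUnit.Framework;

// Toy stand-in for the legacy code under test.
public class Foo
{
    public int Bar { get; set; }
}

public class BarException : Exception
{
    public BarException(string message) : base(message) { }
}

public class LegacyPricer
{
    public int Calculate(Foo foo)
    {
        if (foo.Bar <= 0)
            throw new BarException("bar must be greater than zero");

        return foo.Bar > 10 ? 100 : 42;
    }
}

[TestFixture]
public class LegacyPricerCharacterizationTests
{
    [Test]
    public void ItWorks()
    {
        var input = new Foo();   // started bare; the BarException told us
        input.Bar = 1;           // that Bar had to be greater than zero

        var result = new LegacyPricer().Calculate(input);

        // Pin whatever the code actually returned on the first green run.
        Assert.That(result, Is.EqualTo(42));
    }
}
```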

By now you should have exercised a good chunk of the system under test; however, you may need to add additional tests. For example, if the code contains an "if" statement, you will need at least two characterization tests. A good way to detect how many tests you need is a code coverage tool, or manually inserting assertions into the SUT to reveal any missing coverage. Likewise, a careful manual review is required to detect any other tests you may have missed, such as boundary cases.
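
Continuing the hypothetical example above: the "if" inside LegacyPricer.Calculate forces at least one more test into the fixture to pin the other branch:

```csharp
[Test]
public void ItWorks_WhenBarIsLarge()
{
    var input = new Foo { Bar = 11 };

    var result = new LegacyPricer().Calculate(input);

    // Pin the behaviour of the other branch - bugs included.
    Assert.That(result, Is.EqualTo(100));
}
```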

Now the fun can begin. You can refactor like crazy.

Afterwards you should have a nicely refactored component that you can easily extend or modify to add your new feature. You also have a solid suite of tests to prove you've not broken anything. These tests will also document the current behaviour of the system - bugs included.

Examples