Tuesday, 16 September 2014

Developer Diaries

A few weeks back I stumbled across a tweet (which I unfortunately cannot find to give credit to) that talked about the benefit of keeping a dev diary.

At the same time I was reading Getting Things Done (GTD), so I felt inspired to take note of everything related to development that I do day to day. This would satisfy the criteria I had for my GTD system, and hopefully emulate the success the original tweet was referring to.

I don't have a fancy system as such; rather, I have a single text file shared between the numerous desktops and laptops I have access to. Dropbox keeps the file synced, so I'm always up to date.

Each day I simply make a note of anything that makes me think "I must remember that", or anything that happens to be useful, interesting or new. There is no complex system to this. In order to keep it aligned with GTD, new points are simply appended at the bottom of the file. At the end of each week I group up related notes. For example, if I've got a few bullet points about databases, I move these to fit under a "Databases" heading. This system works for now, though I might have to reassess it in the future. An example of the file is below.

Example

Databases
    - Points
    - about
    - databases

SOA
   - More
   - points

...

####
Everything below here is a rough note
sorted at the end of a week to fit under
the headings above. If no heading exists
one is created.
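The weekly grouping step is mechanical enough to sketch in code. The snippet below is purely illustrative (my real process is manual, and the function name and data shapes are my own invention): it merges the week's rough notes into the existing sections, creating a heading when none exists.

```python
def group_notes(sections, rough_notes):
    """Merge rough (heading, note) pairs into the existing sections.

    sections:    dict mapping a heading (e.g. "Databases") to its bullet points
    rough_notes: notes jotted down during the week, tagged with a heading
    If no heading exists yet, one is created.
    """
    for heading, note in rough_notes:
        sections.setdefault(heading, []).append(note)
    return sections

# Mirrors the example file above: "SOA" gains points, "Testing" is new.
sections = {"Databases": ["Points", "about", "databases"]}
group_notes(sections, [("SOA", "More"), ("SOA", "points"), ("Testing", "New note")])
```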

The most surprising thing about this is that even on a dull day I absorb a lot of "stuff" related to development. Equally surprising is how awful my memory is regarding it. If I skim across the document now, I'm alarmed at the stuff that I would have forgotten had I not taken a note. It's quite possible that I would remember some of this information in the long term, but regular skim readings of the diary are proving very helpful.

Thursday, 21 August 2014

Program for Change

We should program for change, AKA the Open/Closed Principle (OCP). In my opinion, the OCP is one of the lesser respected SOLID principles. One of my biggest and earliest failures fresh out of university was ignoring this concept.

At the time I was applying YAGNI to some code myself and a couple of other developers were working on. After all, agile methodologies promote this concept heavily, and it made sense to me. My solution was to solve the problem with the minimal amount of fuss; however, in doing so I strongly coupled the code we produced to the direct business requirements.

The requirements stated that we would have three different types of expenses, so I proposed that we model these three types directly. The UI knew about these expenses. The database knew about these expenses. The domain logic knew about these expenses.

Everything worked well for a while. We finished early. We wrote just the code we needed. I was happy. Until the business requirements changed. The three types of expenses became four, then three again, then one was replaced completely. Bugger.

The code was unusable. Everything knew just enough to get by, so when the change came in, everything needed to change. My team was confident this would be OK. After a few hours of analysis, we concluded the code was a train wreck. We'd need to restart from the beginning in order to make the proper changes we wanted. I was pretty gutted, however I learned a very important lesson.

YAGNI is about features, not code.

If I were to complete this feature again, I would still start with the simplest thing that could possibly work. Most likely the code would explicitly know about each type of expense, yet my tests would be written in an agnostic manner. I would still apply YAGNI, but at a feature level. In other words, I wouldn't write an expense logger if all we need to do is validate and calculate expense totals.

During each refactor stage of the TDD cycle I would remove any specific expense knowledge. After a while I would end up with the various parts of the application working with a generic expense algorithm. The tests would drive us towards how the algorithm would work.

The beauty here is that if a new expense type were introduced, the change would be data driven. We would be able to give this to the business for "free".
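To make that end state concrete, here is a rough sketch (in Python for brevity, with invented rule names and amounts, not the actual system): the expense types live in data, and the validation and totalling algorithm stays generic, so adding a fourth type means adding an entry rather than editing UI, database and domain code.

```python
# Hypothetical, data-driven expense rules: a new expense type is a new
# entry here, not a change rippling through UI, DB and domain logic.
EXPENSE_RULES = {
    "travel": {"max_amount": 500},
    "meals": {"max_amount": 50},
    "lodging": {"max_amount": 200},
}

def is_valid(expense_type, amount, rules=EXPENSE_RULES):
    """Generic validation: the algorithm never names a specific expense type."""
    rule = rules.get(expense_type)
    return rule is not None and 0 <= amount <= rule["max_amount"]

def total(expenses, rules=EXPENSE_RULES):
    """Sum only the valid expense claims."""
    return sum(amount for kind, amount in expenses if is_valid(kind, amount, rules))

total([("travel", 100), ("meals", 20), ("meals", 999)])  # 120: the invalid claim is ignored
```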

I still regret this mistake, but the lesson has lived with me for some time and has proved to be a valuable experience.

Tuesday, 12 August 2014

Stop.Mocking.EVERYTHING

I've flip-flopped on how to use mock objects since 2008. It's taken me nearly five years to finally claim to have a solid, practical answer on what is, in my opinion, their correct use.

Mock Everything

Some developers told me to mock everything. Every. Single. Collaborator. I wasn't sure about this approach.

  • My tests felt too brittle - tied to implementation details.
  • My tests felt like a duplication of my production code.
  • My test count rose rapidly.
  • This style of testing slowed me down - more to write/execute/debug.

Mock Nothing

Some developers told me to mock nothing, and at times I used no mocks at all. I wasn't sure about this approach either.

  • My tests felt too loose - it was easy to introduce bugs or defects.
  • My production code suffered as I introduced accessors only for testing.

No wonder I was confused. Neither approach sat comfortably with me.

Solution

  • Use mocks for commands
  • Use stubs for queries

This halfway house is built around the idea of command and query separation as detailed by Mark Seemann. This simple principle makes a lot of sense, and finally helped me realise how best to use stubs and mocks.

  • Any commands (methods that have no return type) should have a mock object verifying their use, if they are architecturally significant.
  • Any queries (methods that have return types) should have a stub object supplying the return value, if their use is architecturally significant.

If the collaborator is not significant, or in other words is simply an implementation detail, then no mock or stub is needed. That's right: just new up (or instantiate) your dependency there and then. This allows you to refactor the internals aggressively, without the fear of breaking or rewriting tests.
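A minimal sketch of the rule in practice, using Python's unittest.mock for brevity (the expense gateway and its method names are invented for illustration): the query fetch_pending is stubbed with a canned return value, while the command save_approved is verified with a mock.

```python
from unittest.mock import Mock

def approve_small_expenses(gateway, limit=100):
    """Fetch pending expenses (query), then persist the approved ones (command)."""
    expenses = gateway.fetch_pending()                       # query
    approved = [e for e in expenses if e["amount"] <= limit]
    gateway.save_approved(approved)                          # command
    return len(approved)

gateway = Mock()
gateway.fetch_pending.return_value = [                       # stub the query
    {"id": 1, "amount": 50},
    {"id": 2, "amount": 500},
]

count = approve_small_expenses(gateway)

assert count == 1                                            # behaviour driven by the stub
gateway.save_approved.assert_called_once_with(               # verify the command
    [{"id": 1, "amount": 50}])
```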

This approach has served me well for a while now, and in fact can be achieved even without the need to use a complicated mocking framework, though that will be the subject of a future post.

Tuesday, 5 August 2014

Acceptance Testing need not use the Full Stack

  • Joined a team with thousands of unit tests (~10k)
  • But bugs still got through our QA process
  • How could this be?
  • Team had a small number of full end to end live service tests
  • So my answer was to just increase the number of these
  • Surely this would solve our problem?
  • Not quite
  • The maintenance of these tests was a great burden
  • Each day many tests would fail, but nothing would be "broken".
  • Data would have changed in the DB
  • The UI could have changed
  • The browser could have been slightly slower
  • And so on

Solution

  • Delete the majority of live service tests - limit the tests to the core user journey through the application
  • As long as the pages load up, without an error we shouldn't care
  • Stopped testing logic or behaviour - made the tests loose, e.g. as long as a value is not null or empty we are OK; we don't actually care what the value is.
  • Made use of contract testing to substitute boundaries with in-memory fakes, e.g. persistent storage. This allowed fast, stable acceptance tests to be run against the system without the brittle nature described above.

Benefits

  • Small handful of live service tests (using real DB, UI) caught the majority of the serious flaws that snuck through
  • Thanks to contract testing, future bugs came down to nothing more than missing unit tests
  • Faster to write
  • Easier to debug
  • Faster to execute!

The key point was the use of contract testing. Without contract testing, writing automated acceptance tests is a pretty awful process.

Data requires setup and tear down. Any data change can break your tests, and the UI is often in flux.

By substituting the UI layer or the DB access with fakes, such as a console view or an in-memory hash table, your tests can still cover the whole stack, but in a more stable, bite-sized manner. You simply test your real view or data access separately to prove they work, and that they can in fact be swapped out, thanks to the Liskov Substitution Principle (LSP), by running the same suite of tests against your fakes!
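Here is one way contract testing can look in practice (sketched in Python with unittest; the store interface and all names are my own invention, not the team's actual code): a single suite of tests is written against the abstraction, and every implementation, real or fake, inherits it to prove it honours the same contract.

```python
import unittest

class InMemoryStore:
    """Fake persistent storage: an in-memory hash table standing in for the DB."""
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)

class StoreContract:
    """The shared contract: every store, real or fake, must pass these tests."""
    def make_store(self):
        raise NotImplementedError

    def test_round_trip(self):
        store = self.make_store()
        store.put("expense:1", 42)
        assert store.get("expense:1") == 42

    def test_missing_key_returns_none(self):
        store = self.make_store()
        assert store.get("unknown") is None

class InMemoryStoreTest(StoreContract, unittest.TestCase):
    def make_store(self):
        return InMemoryStore()

# A real DB-backed store would run the exact same contract, e.g.:
# class SqlStoreTest(StoreContract, unittest.TestCase):
#     def make_store(self): return SqlStore(connection_string)
```

Because both implementations pass one suite, the fake can stand in for the real store during acceptance tests without weakening them.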

I'll be expanding on how and what contract testing is in a future post.

Tuesday, 29 July 2014

I Need to Stop Misusing Namespaces

At the recent NSBCon one interesting question that came up was how to structure a project. The panel, consisting of various speakers, had no answer; after all, this is dependent upon the project in question. Therefore there is no right or wrong answer.

However, one point they were in unison about was splitting the domain and the technical implementation of a project apart by the correct use of namespaces.

This is not the first time I've come across this idea, but I find myself breaking the principle on a regular basis. For example, a typical project I work on looks like the following.

/Controllers
   FooController.cs
   BarController.cs
   BazController.cs
/Models
   FooViewModel.cs
   BarViewModel.cs
   BazViewModel.cs
/Helpers
   FooHelper.cs
   BarHelper.cs
   BazHelper.cs

Problems

  • The namespace reflects a technical implementation detail, and not the problem domain.
  • Using Foo as an example, the namespace is duplicated within the names of the types, which in turn defeats the point of namespaces.
  • Another issue is that type names become much longer than they need to be, which is a common criticism of enterprise software development, where object names roll off the screen because they contain so many patterns and conventions.

Solution

Use namespaces for related domain responsibilities. In turn, group together the objects and types that are used together.

An example of a better solution therefore would be:

/Foo
    Controller.cs
    Helper.cs
    ViewModel.cs
/Bar
    Controller.cs
    Helper.cs
    ViewModel.cs
/Baz
    Controller.cs
    Helper.cs
    ViewModel.cs

Benefits

  • Things that change together live logically next to each other. In other words, if I update the FooViewModel, chances are I'll need to update the related views or controllers.
  • Less typing if you don't suffer a namespace clash!
  • You can still prefix the namespace where required, e.g. Foo.Controller if you have a clash or prefer the readability.
  • Shorter type names!

While this is the ideal way of structuring our applications, it's not always possible. Some coding conventions actually encourage the first example, and depending on the configurability of certain frameworks this may prove difficult. That aside, I'll be making a strong push towards structuring my projects correctly going forward.