Tuesday, 11 October 2016

The New Guy

Everyone is new at some point, no matter your experience level. You're either new to the team or new to the business. Being the new person is both a blessing and a curse.

You're New

When you're new you come with no baggage. You're full of questions and curiosity.

  • Why do we do it this way?
  • Isn't there a better way of doing this?
  • Have you considered this instead?

These are all great questions for new starters to ask, and for teams to hear.

You Have a New Team Member

When you have a new team member you gain someone with a fresh perspective. They're full of questions and curiosity. Unburdened by history, they'll be open to new and fresh challenges. A new member can prompt you to question current practices. It is very easy to overlook problem areas until someone with a fresh outlook arrives.

How to be New

There are two roles a new team member must play.

  • Learning
  • Challenging

The learning phase should involve questions, shadowing and pairing. The goal is to learn about the system, the architecture and the business.

The second phase should be to challenge and question the status quo. Provide better solutions, or ask for justifications and explanations. This is a win-win for both the team and the new member. They'll learn, and the team will gain fresh insight into its successes and failures.

The key part of being a new team member is balance between these two roles. Too much learning and no challenging will benefit no one. Likewise, kicking up a fuss over every detail is not going to end well.

New Starter Balance

A past mistake I've made is swaying towards learning the system rather than challenging areas that were clearly wrong or needed improving. This is a tough call, as you don't want to rock the boat, but at the same time some rocking is required. The key is to balance the two.

Advice to my past self would be to tackle areas where you can have an impact, for example a neglected process or area. By picking your battles in this manner you can slowly build your brand within the team, which in turn allows you to take on the more controversial challenges. Once you've been around for a while and proven yourself, you'll have an easier time suggesting and implementing change.


  • Remember the Monkey and Banana Analogy.
  • Balance learning and challenging when you're a new starter.
  • Start slowly as a new starter; stack up small wins over time instead of taking a big bang approach.
  • Embrace new starters; use them to test your processes and documentation.

Monday, 3 October 2016

Constant Object Anti Pattern

Most constants are used to remove magic numbers or values that lack context. A classic example would be code littered with the number 7. What does this refer to exactly? If it were replaced with DaysInWeek or similar, much clarity would be gained. You could then see that code performing offsets is adding days, rather than applying a mysterious number seven.
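As a quick sketch, the contrast looks something like the following C# (the class and method names are illustrative, not from the original example):

    using System;

    public static class Schedule
    {
        private const int DaysInWeek = 7;

        // Magic number: what does 7 refer to here?
        public static DateTime NextOccurrenceUnclear(DateTime start) => start.AddDays(7);

        // Named constant: the offset is clearly a week's worth of days.
        public static DateTime NextOccurrence(DateTime start) => start.AddDays(DaysInWeek);
    }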

Sadly, a common pattern is to lump all constants into a single constants file or object.

The beauty of constants is clarity, and the obvious fact that such values are fixed. When a constant container is used, constants are simply lumped together. The container grows over time and often becomes a dumping ground for every value within the application.

A disadvantage of this pattern is that the actual value is hidden. While a friendly name is great, there will come a time when you want to know the actual value, and this forces you to navigate away, if only to peek at the value inside the constant object. The solution is simply to refactor, moving the constant closer to its use. If it is used within a single method, let the constant live within that method. If it is used throughout a class, let the constant live at field level. Finally, if the constant is used across multiple classes, find a shared home and rely on a well thought out namespace.
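A rough sketch of that placement advice in C# (the namespaces, names and values here are illustrative only):

    namespace Billing
    {
        public class InvoiceFormatter
        {
            // Used throughout the class, so it lives at field level.
            private const string CurrencySymbol = "£";

            public string Format(decimal amount)
            {
                // Used only inside this method, so it lives here.
                const string Template = "{0}{1:N2}";
                return string.Format(Template, CurrencySymbol, amount);
            }
        }
    }

    namespace Billing.Shared
    {
        // Used across multiple classes, so it gets a well named, shared home.
        public static class Vat
        {
            public const decimal StandardRate = 0.20m;
        }
    }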

A similar issue arises when configuration files or similar are used to set the values. While the const keyword is dropped in this case, the object performs the same role: a public key, followed by a value that is used. The anti-pattern here is treating every value as if it requires configuration. Unless you need to change a value at runtime or per deployment, inline constants are much preferred. Literal values, mainly strings, can often be left inline with limited downside too. For example, a fixed, relative file path is much better left inline. If you are worried about a lack of context, the use of named arguments can help.
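For example (a hypothetical C# snippet; the path and type are made up), a named argument gives the inline literal the context a constant's name would otherwise provide:

    using System.IO;

    public static class Defaults
    {
        public static string Load()
        {
            // The relative path stays inline; the named argument adds the context.
            return File.ReadAllText(path: @"config\defaults.json");
        }
    }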


  • Keep constants local to methods, or classes.
  • Avoid constant objects or files as these will become bloated and lack context.
  • Only introduce configuration for values that need to, or will, change. Avoid second guessing future needs.
  • Use named arguments to add context for inline variables.

Monday, 12 September 2016

New and Shiny Things

There is risk in upgrading anything, be it a language, framework, library, OS or third party.

In the past I was rather gung-ho about upgrading. New version out? We need it. In fact, this need is often a want. The new version often seems better. Developers seem addicted to the latest and greatest.

One of the best, but also one of the worst, things about software development is that every week there is something new to use or try. Keeping pace is impossible.

Internet Echo Chamber Effect

If you look at a news article on the release of something, you feel as if you are the only person not using it. Everyone else is using it, so we need to as well.

In fact, quite the opposite is usually true. A site about the latest web framework will make it seem as if everyone apart from you is using the framework. This is known as the Internet Echo Chamber Effect.

Wait for a Patch

Wise advice I received, and saw others follow, is to adopt on the minor or patch release. If version 2 comes out, wait for 2.1. Let others find the issues and let the version stabilize. If you really must use version 2, use it in a low-risk way; personal projects or in-house solutions make sense. You can keep pace but reduce risk in this manner.

Boring but Stable

Another approach is to stick to widely used, stable solutions, avoiding anything new or cutting edge except for personal or internal projects.

If your job is to write software to sell widgets, focus solely on that; what you use behind the scenes really doesn't matter. As long as you can deliver value and aid the sale of widgets, you're on track for success.

A similar alternative is to use boring solutions for anything high risk, while using newer, more exciting solutions for low-risk projects. Again, risk is managed and reduced. If the new, cutting-edge solution becomes the norm, you can adopt it in the future.

A younger, less experienced me would not have found this advice at all appealing. After all, if the tests pass, why can't you upgrade to the latest and greatest? The main issue is risk, which will be the subject of a future post. Every single change, even a single line of code, carries risk.

The one exception to this advice is security. If a security release is available, you should aim to upgrade as soon as possible. Such releases are usually minor or patch releases, meaning the risk is low and the approach matches the delayed upgrade path above.


  • Any change has risk.
  • Reduce risk when handling new technology.
  • Either use stable versions or boring solutions.
  • Play and test new technology on the side, in low risk scenarios.
  • What technology you use to build something actually doesn't matter in most cases.

Wednesday, 31 August 2016

Past Mistakes - ORMs and Bounded Contexts

Sticking with the theme of documenting past mistakes, it's worth expanding on a real-life scenario where I was unaware of bounded contexts and of the importance of fully understanding the tools you use.

Ignoring a Bounded Context

A fellow developer set out on a quest to rid numerous projects of duplicated records, which effectively followed the active record pattern. This was a huge undertaking, split across hundreds of thousands of lines of code over numerous separate projects. Close to the end I assisted, and finally the shared library containing a single record for each type was complete. Each project now referenced the shared copy, which was versioned as each build completed.

For a while this worked with no problems. It certainly felt nice to see the reduction in duplicated code. Sadly, some time later another developer and I made a seemingly innocent change. The change was as far removed from the production error we had just been alerted to as it could be. There was no link: it was a different project, in a different path, on a different model. The only commonality was that the issue only appeared after the previous deploy.

ORMs and Changes

Several minutes of panic later, the problem was spotted. While the model we had changed had no direct relation, there was an indirect one. As each record was loaded by the ORM in question, its links and dependencies were also loaded or checked, as were the children's links and dependencies. Eventually this chain reached the newly changed record. Because the database changed ahead of the library, numerous other projects now hit a runtime error. Naively believing we were working within a single project, we deployed the change to that one project alone. As the library was shared, every other project was now vulnerable.
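The ORM in question isn't named here, but the shape of the failure was roughly the following (a hypothetical C# sketch; the entity names are made up):

    // Records shared via the common library. With relationship traversal
    // (lazy or eager loading) enabled, materialising an Order also touches
    // Customer, and through it Account - the record whose mapping no longer
    // matched the database after the seemingly unrelated deploy.
    public class Order
    {
        public int Id { get; set; }
        public virtual Customer Customer { get; set; }
    }

    public class Customer
    {
        public int Id { get; set; }
        public virtual Account Account { get; set; }
    }

    public class Account
    {
        public int Id { get; set; }
        // Column changed in the database ahead of the shared library release;
        // any project still on the old library version now fails at runtime.
        public string BillingReference { get; set; }
    }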

This lack of a bounded context, and the focus on removing duplication, was not the only lesson here. The issue painfully highlighted how important it is to know exactly what your tools are doing, especially when they work behind the scenes. In fact, my use of ORMs other than micro-ORMs is next to non-existent at present.


  • Use bounded contexts.
  • Favour loose coupling over reduced duplication.
  • Anything shared must be deployed and tested as a single unit; otherwise remove the shared component.
  • ORMs (or other tools) should be understood and respected.

Wednesday, 24 August 2016

Test Your Live System using Live Service Tests

Traditionally there are three categories of functional tests.

  • Acceptance
  • Integration
  • Unit

This is often referred to as the testing pyramid. Unit tests form the bulk of your suite, followed by a smaller number of integration tests. Acceptance tests that cover features form the tip of your testing strategy, and are few in number. These are great, but there is a missing suite of tests - live service tests.

  • Live Service Tests
  • Acceptance
  • Integration
  • Unit

Live Service Tests

The role of live service tests (LST) is to test the live system against the production environment and configuration. LST should be fewer in number than acceptance tests. Unlike acceptance tests, they should run constantly: once a run has completed, kick off a new test run. This will require a dedicated machine or piece of infrastructure, but the value provided is well worth it.
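As a rough illustration of running constantly (a hypothetical C# console runner; the runner path, assembly name and pause are assumptions, not from the post):

    using System;
    using System.Diagnostics;
    using System.Threading;

    class LiveServiceTestLoop
    {
        static void Main()
        {
            while (true)
            {
                // Kick off the live service test suite; paths and names are placeholders.
                var run = Process.Start(new ProcessStartInfo
                {
                    FileName = @"tools\nunit3-console.exe",
                    Arguments = "LiveServiceTests.dll",
                    UseShellExecute = false
                });
                run.WaitForExit();

                // A non-zero exit code means a core journey failed - raise an alert.
                if (run.ExitCode != 0)
                {
                    Console.WriteLine($"LST failure at {DateTime.UtcNow:o}");
                }

                // Short pause so the live system isn't hammered back to back.
                Thread.Sleep(TimeSpan.FromSeconds(30));
            }
        }
    }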

LST should focus on journeys instead of functionality or features. In contrast to acceptance tests, a user journey covers the core purpose of the system. For example, an LST suite covering this blog would ensure the home page loads, an individual post can be loaded, and the archive is accessible. The rest of the site, such as comments or social media interactions, could be broken, but the core purpose of the system is working: readers can read and browse the blog. If at any time the tests detect a failure in the core journey, there is a big problem.
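A journey like that might look something like this (a hypothetical sketch using the Selenium and NUnit tooling mentioned later in the post; the URL and selectors are placeholders):

    using NUnit.Framework;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Chrome;

    [TestFixture]
    public class BlogCoreJourneyTests
    {
        private IWebDriver driver;

        [SetUp]
        public void SetUp() => driver = new ChromeDriver();

        [TearDown]
        public void TearDown() => driver.Quit();

        [Test]
        public void Reader_can_browse_the_blog()
        {
            // Home page loads and shows at least one post.
            driver.Navigate().GoToUrl("https://example-blog.com/");
            Assert.That(driver.FindElements(By.CssSelector("article")), Is.Not.Empty);

            // An individual post can be opened.
            driver.FindElement(By.CssSelector("article a")).Click();
            Assert.That(driver.FindElements(By.CssSelector("article")), Is.Not.Empty);

            // The archive is accessible.
            driver.Navigate().GoToUrl("https://example-blog.com/archive");
            Assert.That(driver.FindElements(By.CssSelector("a")), Is.Not.Empty);
        }
    }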


LST offer the fastest feedback possible because they are constantly running, and it is far more desirable to detect a problem before your users do. Naturally, LST offer great protection after deploys. Deployment of new code is one of the times you are most likely to encounter issues, so a suite of tests triggered after a deployment is a natural fit. LST also protect against unplanned events. In my experience, exhausted disk space, DNS failures, third party issues and more have all been detected.

How To

Adding another suite of tests may sound like increased effort, but the cost associated with LST is rather low. Sometimes acceptance tests can be run as LST, meaning no extra effort at all. Care must be taken here if the tests perform anything destructive or interact with third parties.

Alternatively, writing LST from scratch is simpler than writing standard acceptance tests. The same tooling can be used: Selenium, NUnit and so forth. As the tests focus on journeys rather than functionality, they are often less complex to write.

The only difficulty LST introduce is that they execute against the live system. Consider interactions with a third party: using a real account on the real system may be problematic. One way around this is to embed test functionality within the system itself, for example a test account that the tests use. Instead of executing against the third party system, the dummy account is used, as in the sketch below. Likewise, most third parties offer test accounts which can be set up and used instead.
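A minimal sketch of embedding that test functionality (all names here are hypothetical):

    // Requests from a known test account skip the real third party.
    public interface IPaymentGateway
    {
        void Charge(string accountId, decimal amount);
    }

    public class PaymentGatewaySelector
    {
        private const string TestAccountId = "lst-test-account";
        private readonly IPaymentGateway real;
        private readonly IPaymentGateway stub;

        public PaymentGatewaySelector(IPaymentGateway real, IPaymentGateway stub)
        {
            this.real = real;
            this.stub = stub;
        }

        // Live service tests charge the test account; everyone else hits the real gateway.
        public IPaymentGateway For(string accountId) =>
            accountId == TestAccountId ? stub : real;
    }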

LST are a nice complement to a diagnostic dashboard. If your dashboard is reporting no issues, and your tests are green, you can be confident the system is operating in a good enough state.


  • Functional tests are not enough.
  • Use live service tests to test the real production system.
  • Run live service tests constantly for the earliest feedback possible.
  • Alter production code to introduce test functionality.
  • Make use of test accounts and anything else that third parties may offer.