Tuesday, 1 December 2015

ReactJS and JSHint

The ReactJS Getting Started Guide states that the recommended way of using React is in combination with npm.

This is great, but it poses a problem when trying to use JSHint. The default example outputs a single JS file containing both your code and the React library. The end result is that the linted bundle contains code you do not, and should not, need to care about.

The guide does provide a solution, though it is not as clear as it probably should be: offline transforms. These transform your JSX files into plain JavaScript without bundling React alongside.

babel --presets react app.js --out-file main.js
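As an illustration of what the transform produces, a JSX snippet such as:

    var greeting = <div className="greeting">Hello</div>;

comes out of Babel as plain JavaScript along these lines:

    var greeting = React.createElement("div", { className: "greeting" }, "Hello");

JSHint cannot parse the former, but it handles the latter without any JSX awareness, which is exactly why linting the transformed output works.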

Simply take the result of the transform and perform your linting process.

jshint main.js

This may seem obvious, but I lost some time before realising the benefit of offline transforms.

Offline transforms do require that you either bundle the transformed file with React or simply include the standalone scripts in your HTML. This can be done after the fact. JSHint can then play nicely with your React apps without the need for other tooling such as wrappers or text editor extensions.
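If you want to avoid running the two commands by hand, the transform and lint steps can be chained as npm scripts. A minimal sketch of a package.json, assuming babel-cli, babel-preset-react and jshint are installed as local dev dependencies (the file names and versions are illustrative):

    {
      "scripts": {
        "transform": "babel --presets react app.js --out-file main.js",
        "lint": "npm run transform && jshint main.js"
      },
      "devDependencies": {
        "babel-cli": "^6.0.0",
        "babel-preset-react": "^6.0.0",
        "jshint": "^2.8.0"
      }
    }

npm run lint then transforms and lints in one step.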

Tuesday, 24 November 2015

Throw Code Away

The third and final part of my agile architecture series.

Part one suggested walking skeletons for new features or projects. Part two suggested building the smallest, simplest functionality possible. However, you do not always have the luxury of deferral. Likewise, if the project already exists and you are amending functionality, a walking skeleton is of limited use.


Throw code away. This may sound brutal and like overkill, but throwing code away has many advantages.


  • The second time around you will solve the problem more quickly, having benefited from the first attempt. The first attempt is, in effect, a prototype, and throwing away prototypes is expected. Prototypes are not production ready; they are usually built with intentional shortcuts and quality compromises.

  • The cleanest code is no code. Your subsequent attempts will be cleaner. Knowing the issues from the previous attempt allows you to put code and procedures in place to prevent the same quality problems occurring.

  • Long term goals can be achieved rather than aiming for short term wins. Instead of focusing on meeting the current iteration's goal, consider whether your solution is fit for purpose going forward. Does it scale? Is the quality there?

  • You benefit from hindsight. Most code due to be replaced will have lived through some sort of review process. If the code has been through production you have even more ammunition with which to target the weak points. Where are the hotspots? What changes most frequently? Where do bugs tend to reside?


Throwing code away should not be taken lightly, but it is certainly a valid technique under the right circumstances.

You will have an easier time suggesting starting over on two days' worth of work than you would on two weeks, two months or two years. Keep your batch sizes small and the ability to throw code away becomes easier to accept, with the benefits outweighing the negatives.

Small batches are not the only prerequisite for suggesting throwing code away; small changes are also essential. You can easily suggest throwing away a method or class, but you will, rightly so, have a harder time suggesting throwing away a module or system.

Refactoring is often suggested as a way to combat the need to rewrite or throw code away, but this rarely works in practice. Refactoring is a misused word and a crucially misunderstood technique. If you are changing architecture, you are not refactoring.

The biggest objector you will likely find is yourself. Having become invested in a task, it can be hard to start again. Fight the urge to resist, and throw the code away. You may be pleasantly surprised by the results.

Tuesday, 17 November 2015

Don't Build a Thing

Part two of my agile architecture series.

Here is a real-life example of where I treated an unknown project incorrectly: why I handled it badly, and how I should have handled it if I could rewind time.


An external client had a proposal for a web service which would power part of their new web application. The service sounded very simple: data import and some basic querying, with plans to add additional bells and whistles at a later date. After an initial meeting, development began.

A week later a second meeting took place. A good few hours of development had been invested by this point. The meeting was useful; however, some changes had cropped up. The data format had been modified and my solution could not handle the new format. The querying also needed various modifications.

A week later, after several more hours of changes, the third meeting landed. There were more changes, this time technical adjustments based on feedback from the client's developers.

The fourth meeting introduced scope creep. Could this service handle any potential customer going forwards? At present it certainly could not.

You can see where this is heading. Eventually the requirements stabilised, but not before several days of my time had been spent building something that was not needed, only for me to tear it down and salvage what I could.

The end result was a project I was not proud of. Having invested so much time, I wanted to save as much work as I could; it would be hard to tell my superiors we had wasted X amount of money. The project also lacked long-term stability. Each iteration was built on top of the last, and the feature to handle generic customers was tacked on. Had this been known from day one, things would have looked much better in terms of both code quality and architecture.


There is an easy way to transform an unknown project into a known project - build as little as you possibly can. Do this in the shortest time possible in order to gather feedback, learn and defer decisions. After this process you will be in the best possible shape to tackle the project. These principles are key to the processes within a lean startup.

How I Should Have Handled It

I would have started with a minimal project to demo and deploy; it would do nothing other than return a hardcoded JSON literal. That is enough to demonstrate the service and spark conversations.
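A sketch of what that skeleton might look like, assuming Node with Express purely for illustration - the real service's stack and endpoint names are not what matters here:

    // A hypothetical walking skeleton: no data import, no real querying,
    // just a hardcoded JSON literal that can be demoed and deployed.
    var express = require('express');
    var app = express();

    app.get('/records', function (req, res) {
      // Hardcoded sample data, tweaked by hand as the format changes.
      res.json([
        { id: 1, name: 'Example record' },
        { id: 2, name: 'Another record' }
      ]);
    });

    app.listen(3000);

When the data format changed, only the hardcoded literal would need editing.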

During week two the discovery that a new data format had been chosen would not matter. The feature to load data had not been written after all. At this point the hardcoded data would be tweaked to match the new content. Easy.

Week three would pose no threat. Technical changes around best practices or technology are easily handled because very little code exists.

The newly required functionality discovered in week four would be prototyped, estimated and agreed. As no real work had been done, adding this feature would not only be achievable, it would be architecturally sound rather than bolted on as an afterthought.


From my experience, deferring decisions such as these is so useful that it can be applied to any project. How long a decision can be deferred depends on the scenario, but in many cases you will be pleasantly surprised at just how long decisions can and should be deferred. Even for known projects the power deferral brings is so beneficial that I tend to favour this style whenever possible. Build just enough to gather feedback and go from there.

The key point is that very little time and energy would have been invested. In the second example, of how I should have handled the client, I would have invested only hours of my time; in reality I invested days, and was invested in the first solution. The second solution, however, could be chopped, changed or thrown away with no protest. The act of throwing code away is so important, yet so rarely practised, that it will be the subject of the third part of this series.

Tuesday, 10 November 2015

You Cannot Iterate upon Architecture

This is the first part of a series of posts on why gradual iteration - doing the simplest thing that could possibly work - fails for many software projects. This series will explain why this is the case, and provide solutions.

Spotify has given a talk on how it builds products and manages teams internally, which provides some great insights and advice. As part of this talk an incredibly effective image is used, showing the production of a form of transport to travel from A to B.

In the first half of the image, the product is built piece by piece; each step adds to the last, and it is not until the fourth step that the product is able to take passengers from A to B. Agile development aims to solve the issues around this.

The second half of the image is built iteratively. The goal is still the same: a product to travel from A to B. From the first version this goal is met; however, the team would be embarrassed to release in this state, so further iterations are carried out as the team learns more.

From my experience, building software in this manner only works about half of the time. Every software project I have worked on, from my first line of code up until the present day, falls into one of two categories:

Known Projects or Unknown Projects

A known project is one where the destination is clear and well defined. Internal development projects, refactoring, or replacements fall into this category. Easily half of my professional time has been spent on projects where we know what we are building and when it must be completed.

The second type of project is one where the destination is unknown. You are working directly for an external customer, regrouping with the client regularly to gather feedback and iterate. Over the course of this process the destination may very well surprise you, along with the route you take to get there.

Refactoring is Class or Method Level Only

You could claim the image works for unknown projects: at any point the client (internal or external) could halt development once their vision is complete. For known projects, the area where the image fails is simple - if a car is required, build a car. If the image is demonstrating a known project, building something else only to start recycling, refactoring and forming the code into another shape is costly. Sticking with the vehicle analogy, building a car is complex; in one iteration it would not be possible to gather feedback until it was too late, and much time and many resources would be wasted.

Translating this to a software example, it is the same as building a complex web application. The goal is known, yet the first stab is an HTML page. This is followed by some simple server side logic, and on top of this we add an ORM. Further iterations thrash and push the code around. Early, simple decisions start to come back to haunt us, and the resulting technical debt is either repaid or ignored. As further iterations follow, the architecture of the application suffers. Through sheer force of determination the web application is completed, usually with many compromises along the way. Further enhancements or changes could be costly.


There are two solutions. First and foremost, build a walking skeleton. Using the vehicle example, the first iteration should produce the frame of the car; other than wheels there would be very little else, yet this is still a car, however limited in functionality and features. Using the software example, this would be the core flow of the web app, either hardcoded in places or built using scaffolding. You would still be embarrassed to release this, but architecturally you have all the core parts you need. The benefit is that future iterations simply build upon a good, known framework. The foundations of the project are stable, and there is no fear that after several iterations you stumble upon a technical implementation issue.

The second solution is to turn an unknown project into a known project. This sounds difficult, but there is a remarkably easy way to achieve it - the subject of the next post.

Tuesday, 3 November 2015

Pre Computation

Caching is a common technique, especially over HTTP, where it is made so easy. However, pre computation is an alternative that can reduce failures as well as speed up processing and response times.

Caching Example

Assume a list of countries is to be displayed in the UI. These are often stored in one logical place: the database. A remote call queries the database and returns the results, which are then manipulated and inserted into the UI. Repeat calls are cached for some period by the web server and/or a proxy.
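A sketch of that caching half, assuming an Express handler and a hypothetical db module (all names are illustrative):

    // The query result is served with a Cache-Control header so the
    // web server or a proxy can answer repeat calls without a database hit.
    var express = require('express');
    var db = require('./db'); // hypothetical database helper

    var app = express();

    app.get('/countries', function (req, res) {
      db.getCountries().then(function (countries) {
        res.set('Cache-Control', 'public, max-age=3600'); // one hour
        res.json(countries);
      });
    });

    app.listen(3000);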

Pre Computed Example

As part of the build process, perform the same query, dynamically building up the result set. Using a templating language, modify a base source file by inserting the dynamic result set into it. The end result is a source file containing a collection of countries as if you had hardcoded the values; the difference is that these values are pulled from a single source of truth as part of the pre-build step.
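A minimal sketch of such a pre-build step in Node, again assuming a hypothetical db module as the single source of truth:

    // build-countries.js - run as a pre-build step, not at runtime.
    // Queries the source of truth and writes out a plain module with
    // the countries hardcoded, as if they had been typed in by hand.
    var fs = require('fs');
    var db = require('./db'); // hypothetical database helper

    db.getCountries().then(function (countries) {
      var source = 'module.exports = ' +
        JSON.stringify(countries, null, 2) + ';\n';
      fs.writeFileSync('src/countries.js', source);
    });

The generated src/countries.js then ships with the build, and the application requires it like any other module - no remote call involved.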

In a statically compiled language you would gain compile time safety once this file is generated. Regardless, a simple suite of tests to ensure the collection is not empty or badly formed would be beneficial.
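Such a suite can be tiny. A sketch using Node's built-in assert module against the generated file:

    // Sanity checks over the generated module, run as part of the build.
    var assert = require('assert');
    var countries = require('./src/countries');

    assert(Array.isArray(countries), 'countries should be an array');
    assert(countries.length > 0, 'countries should not be empty');
    countries.forEach(function (country) {
      assert(typeof country.name === 'string', 'every country needs a name');
    });

    console.log('countries module looks well formed');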

Once the deploy is complete, all queries for the collection of countries are served from the pre computed collection. This technique works regardless of language, due to the simplicity of storing a collection of items in a literal array or hashtable. For content that changes regularly you can use a separate content deploy, which simply deploys any changes to content.

Pre computation works even for what appears to be dynamic content. Article submission sites, e-commerce sites and wikis could all be developed using pre computation.

Use punch outs for anything that varies based on user or context; JavaScript is the natural choice for inserting this dynamic content. This advice flies in the face of the direction much of the modern web is heading, but the benefits of fewer remote calls, fast responses and fewer moving parts should not be underestimated.
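A sketch of a punch out, assuming a hypothetical /api/basket endpoint: the page itself is pre computed and static, and only the user-specific fragment is filled in at runtime.

    // Runs on page load. Everything else on the page is static;
    // only this element varies per user.
    var request = new XMLHttpRequest();
    request.open('GET', '/api/basket');
    request.onload = function () {
      var basket = JSON.parse(request.responseText);
      document.getElementById('basket-summary').textContent =
        basket.itemCount + ' items in your basket';
    };
    request.send();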

Naturally, pre computation will not work where content is highly dynamic or specific to individual users. Single page applications, social media streams and the like are better suited to dynamic content, cached where possible. Additionally, adjusting a system to handle content deploys is not something that can be done lightly. As the build and deploy process must accommodate these changes, pre computation usually needs to be thought of up front, or requires some rework to introduce.