
Recursively Building a Web Service using the same Web Service

Back during the latter part of 2011, a common theme kept occurring in our weekly retrospectives: how can we replicate our live environment as closely as possible?

We took steps towards this goal by creating a single machine image to ensure all our machines were configured identically. Another quick win was to restore certain aspects of our live data to our local development databases overnight. This enabled us to take stack traces from our logs, paste them straight into our IDE, and replicate the user's problem instantly; without the same data set we could have seen different results. Despite these positive steps, there was a missing link in our replication process: how do we simulate the traffic of our live environment? As an example, our current web services average four to five thousand calculations per minute, while our local and demo environments were nowhere near this figure.

During 2011 I found myself involved in many deployments about which, despite heavy testing, I was uneasy. On our demo environments we could throw the same amount of load against our services, yet some time after deploying, the service would fall over. We would quickly have to revert and go back to the drawing board. The problem was that although our traffic was mimicked in terms of volume, the load was not real. Our customers produce many more variations of requests than we were predicting. The other obvious issue was that during local development the service might well handle the same volume of traffic, yet once live, after the process had been running for a few hours, things might go bump, with factors such as memory leaks or timeouts being the culprits.

Collectively we had a few ideas on how to solve this. We looked into low-level solutions such as directing traffic from IIS/Apache towards other servers. We examined other load-testing tools, and we even contemplated creating our own load generator: an internal tool that would walk over our database and fire off a number of requests at our demo environment. I felt uneasy with all these solutions; they were not "real" enough. I wanted the real-time traffic to be submitted to our demo services, as only then could we have full confidence in our work.

My idea was rather radical in the sense that it was so easy, yet dangerous enough that it might just work. I proposed we integrate our own service into itself. In other words, just before our service returns the results of the calculation, it takes the user's request and submits it again, against our demo environment. The same service would be recursively submitting into itself. To ensure we did not affect the speed of the service, the submission is performed via an async call, meaning that if this second call were to die, the live service would be unaffected. The obvious downside was that in order to test this, we needed to deploy the changes to our live service. This was achieved via a feature toggle, meaning that at any time we could turn the feature on or off without affecting any customers.

The end result is that when the feature is enabled, the traffic on our live service is mirrored to our demo service. This allows us to deploy experimental or new features and changes to the demo environment and check them under real load, with real-time data. If all goes well after a period of time, we deploy to our live service; if not, we roll back and no one is the wiser.
