
The QA Test Matrix

Historically, teams I've worked with have taken a few different approaches when designing tests against acceptance criteria. In one, the business defines the feature while the team helps define the acceptance criteria. The business gets the final say, and acceptance criteria are added or removed until everyone agrees. The strength of this approach is that everyone is involved in the process, so nothing is missed or misunderstood. Its biggest flaw is that the documentation produced is often verbose, relying on wordy Given-When-Then scenarios. From this, a test plan is created, mapping tests to acceptance criteria.

An alternative approach is to have the business define both the feature and the acceptance criteria while the team comes up with a corresponding test strategy. This more technical approach allows a separation of testing activities and test categories. Finally, the test plan is played back to the business and correlated against the acceptance criteria. A downside of this approach is that not everyone is involved at the same time, so there can be a disconnect from what the business is actually asking for. Both approaches work, though they yield mixed results on a case-by-case basis.

The QA Matrix

I've recently been introduced to the concept of a testing/QA matrix, which is a far more condensed and simplified solution. It has the benefit of engaging the whole team, while producing nothing more than a simple table that fits comfortably on an A4 page. The left-hand column lists each condition of acceptance (COA), while the remaining columns are marked to indicate the type of test that will cover that functionality. An example is below.

           Unit   Integration   Acceptance   Contract   Manual
    COA 1            X                                     X
    COA 2   X
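The matrix is simple enough to capture as data. Below is a minimal sketch (the COA descriptions are hypothetical) that stores each condition of acceptance alongside the test types covering it, and renders the same plain-text table:

```python
# The test-type columns from the matrix above.
TEST_TYPES = ["Unit", "Integration", "Acceptance", "Contract", "Manual"]

# Each condition of acceptance maps to the set of test types covering it.
# These COAs are hypothetical examples, not from the original post.
matrix = {
    "User can log in": {"Integration", "Manual"},
    "Password rules are enforced": {"Unit"},
}

def render(matrix, test_types=TEST_TYPES):
    """Render the QA matrix as a plain-text table."""
    width = max(len(coa) for coa in matrix)
    lines = [" " * width + "  " + "  ".join(test_types)]
    for coa, covered in matrix.items():
        cells = "  ".join(
            ("X" if t in covered else " ").center(len(t)) for t in test_types
        )
        lines.append(coa.ljust(width) + "  " + cells)
    return "\n".join(lines)

print(render(matrix))
```

Keeping the matrix as data rather than a document also makes it trivial to count how many marks fall in each column.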

The beauty of this matrix is that at a glance you can see where your testing efforts lie. If too much falls on the right of the matrix, you may need to reconsider and question your approach. Is there a way to limit the more expensive styles of test and still gain confidence? Other questions can arise around test coverage and whether higher-level tests are needed.

When producing this matrix the whole team, including the business, should be involved. By having everyone together, decisions can be made quickly and with everyone in agreement. It also allows debate and discussion around how each feature should be tested.

The higher-level tests can be translated directly into automated tests, while the lower-level tests need to be confirmed at a later date, once the code is complete.
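One way to make that later confirmation easy is to tag each automated test with the COA it covers as it is written. A minimal sketch, assuming no particular test framework (the COA ID and test name are hypothetical):

```python
# Registry of which COA each automated test covers, so lower-level
# coverage can be confirmed against the matrix once the code exists.
COVERAGE = {}

def covers(coa_id):
    """Decorator that tags a test function with the COA it covers."""
    def wrap(fn):
        COVERAGE.setdefault(coa_id, []).append(fn.__name__)
        return fn
    return wrap

@covers("COA-2")
def test_password_rules_are_enforced():
    # Placeholder body; a real test would exercise the validation code.
    assert True
```

Most test frameworks offer an equivalent tagging mechanism (markers, categories, traits), which avoids hand-rolling a registry like this.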

Alongside the QA matrix it may be worthwhile adding a simple diagram of the components that will be involved, such as web servers, databases and so on. This can aid discussion and highlight hot spots for changes or tests.

Finally, for demonstration to the business, the matrix can be used as a form of contract for signing off functionality. Once the feature is complete, it is simply a case of finding the corresponding tests, confirming their existence and making a note of the commit that included them.
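That sign-off step can itself be sketched as a simple check: compare the COAs planned in the matrix against the COAs actually covered by tests, and flag anything without a corresponding test. The COA IDs and test names below are hypothetical:

```python
# COAs planned in the matrix versus tests found in the suite.
planned = {"COA-1", "COA-2", "COA-3"}
covered = {"COA-1": ["test_login"], "COA-2": ["test_password_rules"]}

# Any planned COA with no corresponding test blocks sign-off.
missing = sorted(planned - covered.keys())
print("COAs with no tests:", missing)
```

Running a check like this in the build keeps the matrix honest as the feature evolves, rather than relying on a one-off manual review.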


  1. Hi mate,

    Excellent piece. My only contention is that the matrix is useful in providing an overarching high-level view of the *possible* tests that are determined *at that time* - deeper analysis may result in more tests further down the line, particularly when actually sitting down and pairing to develop the tests (i.e. developing using a test first approach). This means that the matrix will therefore provide an inaccurate reflection of where the coverage lies as the features translate into code.

    However, as an instrument to drive the discussion, I agree, it's pretty useful.


    1. Thanks

      Yeah, I completely agree, great point. Using the matrix afterwards is really just a guide; additional coverage could easily be added or removed during development. In terms of conversation and discussion around testing approach it has been excellent, like you said.

