Thursday, 11 April 2013

Small change? Test everything!

As QAs, our job is to ensure quality. All too often, though, I hear about a small change for which the testing a QA has asked for is massive. I feel QAs have a tendency to say "test everything" when they don't fully understand the change, when a few questions could isolate the change to a specific system and lead to an appropriate ten-minute test strategy.

Unfortunately, I think this often happens because the QA is scared to ask exactly what the change is and which systems are affected, and in all honesty no one should be afraid to ask when they don't understand something. On the flip side, whoever you ask, don't take their response as gospel: do some investigation yourself until you fully understand the risks and the effects the change will have.

I've experienced a number of scenarios where I've questioned the amount or type of testing being done on a task. For example, a database change will very rarely (if ever) require cross-browser testing, and a small change (e.g. adding a link to a page) in one part of a system will not require regression testing of the whole system. Yet all too often this is exactly what happens, and unfortunately it isn't challenged enough.

There need to be clear lines about what is in scope for testing, what is out of scope, and the risks (if any) associated with not testing. The risk of not testing a database change cross-browser is negligible (depending on the change, of course).

And I'm not just talking about functional testing: non-functional testing, such as performance testing, is often performed when it isn't necessary. To fix this we need to talk to other stakeholders and to developers. Don't be afraid to not understand something at first; only by asking questions will we learn.

What can we do to rectify this?

We as QAs need to be much smarter about what we test and how we test it, or else we will get a reputation for being slow workers, for not testing efficiently, or for being inconsistent. If one person says to test something one way, and a person on another team tests something similar a different way, it makes everyone look bad. As a QA department, consistency across teams will improve the perception of QA across the IT department. As I've mentioned before, this is an area that is often found lacking, rightly or wrongly, and we should do all we can to improve it.


  1. Lol, well said. There's a tendency to want to test everything without meaningful justification or without quantifying the risk the change may introduce. In my view it's individuals choosing to take the path of least resistance, plus a lack of strong leadership in a lot of QA departments. Frankly, QAs are the butt of way too many jokes :-) "reputation for being slow workers, for not testing efficiently, or get labelled as inconsistent". It's easy to see why people think QAs are lazy, slow, or even thick. :-) But there are exceptions out there.....

    1. I also think a lot of it comes down to what I said in my previous post: 50% of QA people shouldn't really be in QA. There can also be a very big fear of change; when systems are tightly coupled it's difficult to fully understand the implications, especially when no one else on your team fully understands them either. It would be lovely if systems weren't so coupled together; it would make testing more straightforward and releases easy! One can dream, right!?

  2. This is where developers should take the responsibility of ensuring that the change doesn't break any existing unit tests (if any have been written). If no unit tests exist, they should at least write some for the change. More often than not the answer from developers is that the change isn't possible to unit test, which is a poor excuse.

    I agree that you need time set aside to assess the change and then figure out what knock-on effects it has on other systems.

    You also need to identify whether the change affects core, high-risk areas of the system, such as payments or product selection (add to basket). That will more often than not dictate the depth of your testing effort.

    I'm a big fan of automated tests, which, if they exist, will cover a broad range of your bases.
    Lastly, performance tests are often forgotten, and I've seen releases rolled back because a change brought the site to a standstill when there had been no planning around system performance.

    1. I don't think it should only come from developers; we as a QA team need to ask questions and do some investigation ourselves, walking through the changes and the systems affected. That should help drive the test plan. We shouldn't just accept "test everything" from developers (a mistake I have made in the past).

      I like the high-risk-area approach. All too often testing gets squeezed; by identifying the high-risk areas we can focus our testing effort on those when appropriate.

      Performance tests are often forgotten about, but they are also sometimes bolted on to projects willy-nilly, and sometimes the results are ignored!! :p
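On the unit-test point raised in the comments above: the safety net for a "small change" can be tiny. Here's a minimal sketch using Python's built-in `unittest`, where `apply_discount` and its clamping rule are entirely hypothetical, standing in for whatever small function the change actually touched:

```python
import unittest

def apply_discount(price, percent):
    # Hypothetical helper touched by the "small change":
    # the change now clamps the discount into the 0-100% range.
    percent = max(0, min(percent, 100))
    return round(price * (1 - percent / 100.0), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_discount_is_clamped(self):
        # Covers the new behaviour introduced by the change.
        self.assertEqual(apply_discount(100.0, 150), 0.0)
        self.assertEqual(apply_discount(100.0, -10), 100.0)

if __name__ == "__main__":
    unittest.main()
```

A couple of focused tests like these, written alongside the change, are exactly the kind of cheap coverage that makes "test everything" unnecessary for a one-line fix.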