
Small change? Test everything!

As QAs, our job is to ensure quality. All too often, though, I hear about a small change for which the testing a QA says is needed is massive. QAs have a tendency to say "test everything" when they don't fully understand the change, when a few questions could isolate it to a specific system and produce an appropriate ten-minute test strategy.

Unfortunately, I think this often happens because the QA is scared to ask exactly what the change is and which systems are affected. In all honesty, no one should be afraid to ask when they don't understand something. On the flip side, whoever you ask, you shouldn't take their response as gospel: do some investigation work yourself until you fully understand the risks and the effects this change will have.

I've experienced a number of scenarios where I've questioned the amount or type of testing being done on a task. For example, a database change will very rarely (if ever) require cross-browser testing, and a small change (e.g. adding a link to a page) in one part of a system will not require regression testing of the whole system. Yet all too often this is exactly what happens, and unfortunately it isn't challenged enough.

There need to be clear lines about what is in scope for testing, what is out of scope, and the risks (if any) associated with not testing. The risk of not testing a database change across browsers is negligible (depending on the change, of course).

Alas, I am not just talking about functional testing. A lot of the time non-functional testing, such as performance testing, is performed when it's not necessary. To rectify this we need to talk to other stakeholders and to developers. Don't be afraid to not understand something at first; only by asking questions will we learn.

What can we do to rectify this?

We as QAs need to be a lot smarter about what we test and how we test, or else we will get a reputation for being slow workers or inefficient testers, or be labelled as inconsistent. If one person says to test something one way, and someone on another team says to test something similar in a different way, it makes everyone look bad. Consistency across teams will improve the perception of QA across the IT department. As I've mentioned before, this is an area that is often lacking, rightly or wrongly, and we should do all we can to improve it.


  1. Lol, well said. There's a tendency to want to test everything without meaningful justification or without quantifying the risk that may be introduced by the change. In my view it's a lack of leadership, and frankly individuals choosing to take the path of least resistance; strong leadership is also missing in a lot of QA departments. Frankly QAs are the butt of way too many jokes :-) "reputation for being slow workers, for not testing efficiently, or get labelled as inconsistent". It's easy to see why people think QA are lazy, slow or even thick. :-) But there are exceptions out there.....

    1. I also think a lot of it is down to things like I said in my previous post, that 50% of QA people shouldn't really be in QA, and that there can be a very big fear of change. When systems are tightly coupled it's difficult to fully understand the implications of a change when no one else on your team fully understands them either. It would be lovely if systems weren't so coupled together; it would make testing more straightforward and releases easy! One can dream, right!?

  2. This is where developers should take the responsibility of ensuring that the change doesn't break any existing unit tests (if any have been written). If no unit tests exist, then at least ensure they write some for the change. More often than not the answer from developers is that it's not possible to unit test the change, which is a poor excuse.

    I agree that you need time set aside to assess the change and then figure out what knock-on effects it has on other systems.

    You also need to identify whether the change affects core, high-risk areas of the system, such as payment changes or product selection (add to basket). That will more often than not dictate the depth of your testing effort.

    I'm a big fan of automated tests, which, if they exist, will cover a broad set of your bases.
    Lastly, performance tests are often forgotten. I've seen releases rolled back because a change brought the site to a standstill and there had been no planning around ensuring system performance.

    1. I don't think it should only come from developers. We as a QA team need to ask questions and do some investigation ourselves; walking through the changes and the affected systems should help drive the test plan. We shouldn't just accept "test everything" from developers (a mistake I have made in the past).

      I like the high-risk-area approach. All too often testing is squeezed, and by identifying the high-risk areas we can focus our testing effort on them when appropriate.

      Performance tests are often forgotten about, but they are also just bolted on to projects willy-nilly, and sometimes the results are ignored!! :p

