
How to decide what and when to automate tests?

We all know that repetitive manual testing can be, and at times is, boring... but unfortunately it remains a necessity for some aspects of testing.

One thing that I love, and that certainly reduces the load of manual testing, is automated testing, whether at the service level, through an API, or (especially) at the Web UI level. Whenever any new piece of testing comes along, QA regularly ask the same question: do we want to automate this?

When deciding which tests we should automate, I feel it's important to answer a few questions:
  • Will this test form a part of the regression pack for the application?
  • Will this test be run multiple times during the development process?
  • Can the same level of testing be achieved by automating this test?
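The three questions above can be sketched as a simple decision helper. This is a minimal, illustrative heuristic; the function name, parameters, and threshold are my own invention, not a prescribed process:

```python
def should_automate(in_regression_pack: bool,
                    run_count_estimate: int,
                    automatable_to_same_level: bool) -> bool:
    """Rough heuristic combining the three questions above.

    The names and thresholds here are illustrative, not prescriptive.
    """
    # If automation cannot reach the same level of testing, stop here.
    if not automatable_to_same_level:
        return False
    # Otherwise, automate anything in the regression pack or run repeatedly.
    return in_regression_pack or run_count_estimate > 1

# A regression-pack test that automation can fully cover:
print(should_automate(True, 1, True))    # True
# A one-off visual check better left to a human:
print(should_automate(False, 1, False))  # False
```

In practice the answers are rarely this binary, but writing the trade-off down makes the team's automation decisions consistent rather than ad hoc.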
I'll tackle the first question, as it's the most basic and the easiest to answer. If a test is to form part of a regression pack, then yes, it should be automated: it will save time in the future and offer more assurance when shipping future releases of the software.

As for the second question, if a test will be run multiple times, it makes sense to automate it and reduce the effort each run takes. This is especially useful for tests that check whether a bug has been fixed: to verify that subsequent builds do not reintroduce the bug into the wild, automate the check and run it as part of the build process (if at all possible) to catch these issues early.
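As a sketch of pinning a bug fix with an automated check, here is a hypothetical example: suppose a bug report showed that a `slugify` helper turned consecutive spaces into double hyphens. The function, the bug number, and the input are all invented for illustration:

```python
import re

def slugify(title: str) -> str:
    """Turn a title into a URL slug (illustrative implementation)."""
    # Collapse any run of non-alphanumeric characters into one hyphen.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def test_bug_1234_double_spaces_do_not_create_double_hyphens():
    # This exact input came from the (hypothetical) original bug report.
    assert slugify("Hello  World") == "hello-world"

test_bug_1234_double_spaces_do_not_create_double_hyphens()
print("regression test passed")
```

Because the test encodes the exact failing input from the report, any build that reintroduces the bug fails immediately rather than slipping past a manual re-check.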

Finally, some aspects of manual testing cannot easily be automated, for instance checking the location of an element on a web page (for Web UI testing): the naked eye can easily notice if an element is misplaced or if something is rendering incorrectly (e.g. text overlapping other text). Because of this, I tend to shy away from automated cross-browser testing at the moment... However...

Google has an interesting piece of software that monitors the top 1000 pages from search results when testing new versions of Chrome; it detects any variations between the version under test and previous versions, and emails developers to let them know. It is even clever enough to account for dynamic content, for instance news sites whose front pages are constantly changing. Whilst I understand that something like that is extremely complex and possibly overkill for some applications, it is an extremely impressive piece of software that I would love to see in action one day!

So whilst automation is an extremely effective toolset to have, there will always be some element of manual testing to go along with it. This manual testing doesn't have to be scripted, far from it: it can take the form of Exploratory Testing (more about that in future posts). As time goes on, I am sure more effective ways will emerge for performing cross-browser testing and verifying that elements display correctly on the front end. None of this hinders the effectiveness of automating tests at the service or API level: their requests and responses are structured in a way that isn't going to change over time, so I find you can achieve 100% coverage of service- and API-level tests in an automation test suite.
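To illustrate why API-level tests automate so cleanly, here is a minimal contract check against a stubbed service response. The payload, endpoint shape, and field names are invented for illustration (a real test would make an HTTP call instead of using a hard-coded string):

```python
import json

# Stubbed service response standing in for a real HTTP call.
raw_response = json.dumps({"id": 42, "status": "active", "name": "widget"})

def check_contract(payload: str) -> None:
    """Assert the response keeps the structure consumers rely on."""
    body = json.loads(payload)
    # Required fields must be present...
    assert set(body) >= {"id", "status"}, "required fields missing"
    # ...and have the expected types and allowed values.
    assert isinstance(body["id"], int)
    assert body["status"] in {"active", "inactive"}

check_contract(raw_response)
print("contract check passed")
```

Because the structure is fixed, there is no visual judgement involved: either the response matches the contract or it doesn't, which is exactly the kind of check a machine runs better than a human.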

There is also the benefit that the time saved by automating a test can be spent tackling more important and complex test cases during the development of an application. So not only does automation reduce the regression testing effort, it increases the effectiveness of the testing effort going forward.

Automation also lends itself to application ownership, which is often only exercised during development but in reality should continue for as long as the application is in use, as these tests will live for the lifecycle of the product.

In this post we have only really talked about acceptance tests; in future posts we will discuss the importance of unit tests and integration tests.
