How to write good manual tests?

When I first started out writing manual tests, I was really keen to document every step in minute detail and write out the expected result for every little step. In an ideal world where time is no constraint, resources are no constraint and updating the scripts is no constraint... oh, and the person writing the scripts never gets bored... then this would probably be the best way!

However, in the business world, we have all of these constraints!

The business wants features yesterday; it wants those features delivered with minimal resource within an acceptable time frame, and it wants new features further down the line that make the old ones redundant.

With the above in mind, it's important to ask: what makes a good manual test? After all, these tests may or may not be around forever...

I think that in order to answer the question above, another question needs to be addressed first:

Is it going to form part of a manual regression test pack? Or is it throwaway?

If a test is going to be rerun and form part of a manual regression pack, then in my opinion it needs to be in as much detail as possible, and time needs to be taken to maintain it. I believe it's good practice to get new QA starters to run through a regression pack for the area they are testing, which helps them get a feel for the application. The last thing you want is constant questions over whether something is correct behaviour or not (I am not moaning at new starters who ask this; they are genuine questions, and the fault lies with the test if the test says otherwise). Detailed tests are also beneficial in a truly cross-functional team, meaning that developers can run the tests without too much confusion.

For throwaway tests, I am a big fan of Given, When, Thens (Gherkin and BDD), or even exploratory testing (more on this in future posts). For example:

A Given When Then for searching for a Coming Soon product on a website
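The original post showed this example as an image; a plain-text sketch of such a scenario in Gherkin (the product name, step wording and labels here are illustrative, not from the original) might look like:

```gherkin
Feature: Product search

  Scenario: Searching for a Coming Soon product
    Given a product "Widget X" exists with a status of "Coming Soon"
    When I search for "Widget X" on the website
    Then "Widget X" is displayed in the search results
    And the product is labelled "Coming Soon"
```

Someone who knows the application can run this without any further instruction, which is exactly the point of the format.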

Whenever possible, I like to write throwaway tests in this style, purely because they are meant to be run by someone who knows the application under test. That means the test is quick and easy to write, quick and easy to update (if required) and can easily be parameterised. Of course, there can be additional information in there (for example, SQL scripts that may need to be run) if required. So long as each step is clearly defined, I see no reason why you wouldn't want to write tests in this way, unless it is a complex end-to-end test; but those types of tests tend to make it into some form of regression pack anyway, so GWTs aren't really suitable for them.

The project I am working on currently isn't really new functionality that will need to be added to a regression pack, and as such, all of the tests being written are in this format, with attached data (SQL scripts etc.) if needed.

Another positive for GWTs is that they are easily automated. This can be achieved using SpecFlow, which binds the steps to code and creates automated tests that can be run using a toolset of your choice (we currently use Selenium WebDriver for the Web UI automated tests; I will go over this in more detail in future posts).
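As a sketch of how that binding works, a SpecFlow step definition for a search step might look something like the following. The class, element IDs and step wording are illustrative assumptions, not from the original post; the SpecFlow attributes and Selenium calls are the real APIs.

```csharp
using OpenQA.Selenium;
using TechTalk.SpecFlow;

[Binding]
public class SearchSteps
{
    // Assumes the driver is supplied by the test setup (e.g. via SpecFlow's context injection)
    private readonly IWebDriver _driver;

    public SearchSteps(IWebDriver driver)
    {
        _driver = driver;
    }

    // Matches a Gherkin step such as: When I search for "Widget X" on the website
    [When(@"I search for ""(.*)"" on the website")]
    public void WhenISearchForOnTheWebsite(string productName)
    {
        var searchBox = _driver.FindElement(By.Id("search"));
        searchBox.SendKeys(productName);
        searchBox.Submit();
    }
}
```

Note how the quoted value in the step is captured by the regular expression and passed in as a parameter; this is what makes GWTs so easy to parameterise, whether run manually or automated.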

Manual tests are on the decline. When I first started in QA five years ago, the vast majority of tests were manual; now, with agile values, continuous integration and better tooling, more and more tests are becoming automated, with what little manual testing remains being covered by exploratory testing (again, more on this in future posts).


  1. What do you mean by throwaway tests in terms of automation?

    1. Sorry, perhaps I wasn't 100% clear: I meant throwaway manual tests, tests that aren't going to form part of a regression pack and aren't going to be used again. For instance, I recently worked on a project that had an aggressive go-live date; the business wanted it out quickly, and it was just reworking existing functionality for a new language, so I created throwaway tests around it, in the format of GWTs. Minimal time and effort, but enough information to let people know what is being tested and how.

