
Writing Test Plans

First, let me remind you of the following scene from There's Something About Mary:

Hitchhiker: You heard of this thing, the 8-Minute Abs?  
Ted: Yeah, sure, 8-Minute Abs. Yeah, the exercise video. 
Hitchhiker: Yeah, this is going to blow that right out of the water. Listen to this: 7... Minute... Abs.  
Ted: Right. Yes. OK, all right. I see where you're going.  
Hitchhiker: Think about it. You walk into a video store, you see 8-Minute Abs sittin' there, there's 7-Minute Abs right beside it. Which one are you gonna pick, man?  
Ted: I would go for the 7. 
The quicker something can get done, the better, right? Within reason, of course...

On any project that involves some form of testing, you as a QA will be required to write some form of test plan document. Everywhere I've worked has done this slightly differently, but the premise has been the same: to let people know what is going to be tested for the project.

Done up front, test plans can be invaluable, although I question how much time should be spent writing one. I do not see the value in writing a test plan on a per-project basis, which I have done in the past, and still do today if that is what is required. In my opinion, too much time is spent writing test plans that are often looked at once and then forgotten.

I much prefer, and see more value in, writing one on a per-release basis: you know more about what will be tested, and each release has different implications for what needs testing. On a recent project, one release could potentially impact performance, so we carried out performance testing for that release. If the test plan had covered the entire project, this could easily have caused confusion, as performance testing was only performed for that one release.

Another thing: test plans should take no more than 20 minutes to write (even less if the release is small). They shouldn't be long documents; they should be a simple, high-level list of what is being released, what needs to be tested for that release, and any risks in the process. If more detail is required, the test cases can be consulted for a clearer picture. This also makes updating the test plan easier if a new feature is added or the scope increases.
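To make the shape concrete, here is what such a lightweight per-release plan might look like, sketched as a simple Python structure. This is purely illustrative; the release name, scope items, and risk entries are all invented:

```python
# A hypothetical per-release test plan, kept deliberately small:
# what is being released, what needs testing, and any known risks.
release_test_plan = {
    "release": "2014-03 payments release",  # invented example name
    "in_scope": [
        "New payment confirmation screen",
        "Regression of existing checkout flow",
    ],
    "out_of_scope": [
        "Performance testing (no performance-sensitive changes this release)",
    ],
    "risks": [
        "Third-party payment gateway sandbox can be unreliable",
    ],
}

# A plan this small is quick to write and just as quick to update
# when scope changes mid-release.
for section, items in release_test_plan.items():
    print(f"{section}: {items}")
```

The same few headings could just as easily live in a wiki page or a bullet list; the point is that the whole plan fits on one screen.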

So if you spend 15 minutes on a test plan per release, you're probably spending about as much time (if not less) as you would writing one per project, only it's far more valuable because it's relevant to whatever is currently being worked on.

As an interesting side note (and a driver for this discussion), in How Google Tests Software the authors describe putting a group of testers in a room and giving them 10 minutes to come up with a test plan for a certain project. In those 10 minutes, they produced just as much as a long, formal test plan would have contained, but instead of filling it with unnecessary words, they used bullet points or tables of the information the test plan actually needs. This is definitely something I wish to try one day!

Watch out for my next blog post... the 9-minute test plan ;)


  1. What about a Master Test Plan? Surely that's why some banks prefer them: to list the test cases/test areas.

    Or just an agile test plan, a 2-3 page document.

  2. Is this test plan done per release or per story? I'm asking because a release at my current company sometimes consists of more than 10-15 stories (a 1-2 month cycle). And our requirements change so much, so frequently, that it is almost impossible to plan anything beforehand for an entire release, but we do a very high-level test plan (very informal, and only when necessary) before testing a story.

    Furthermore, a story is usually subdivided into individual tasks in my current company. For example, a new functional area is shared by three devs, each working on a different part: the database stored procedures, the backend, and the front end. They are all working on individual tasks, so I need a plan for each of these tasks separately, because they will be tested separately. But we also need to run a regression for the story before closing it or pushing it to UAT, and make another test plan to make sure they all come together.

    I am confused about test plan now. :)

    Any thoughts?

    1. I'm talking about a high-level test plan per release, which is even more important when you are doing smaller releases. And if a test plan takes 10 minutes to write, then you're not really wasting time by doing one.

      A test plan should be open to change as stories progress; however, changes to a story "shouldn't" change the test plan too much, as those changes will be reflected in the test cases.

      It's important not to get bogged down updating test plans, test cases, etc. when there often isn't much business value in doing so. Provided there is a test plan in place that details most of what you wish to achieve, I personally do not see the point in updating it; in my eyes the test cases form a huge part of the test plan, so updating it in two places is a waste of time and effort.

      In your specific examples, I would have a test plan per release, as mentioned. Test cases will be created for each story, and these will form part of the test plan, although they won't be referenced in it directly (otherwise changes to the test cases would mean updating the test plan). The test plan should highlight any regression to be carried out, whether performance testing is needed, the high-level areas of impact, that kind of thing.

      I think I may have rambled a bit, but I hope it's a bit clearer.... :)

    2. Makes complete sense! Thanks.

  3. I am trying to write a test estimate document. We don't story-point before the start of a release, or at any stage at all (not ideal, I know!). I am now making a simple document (feature and time columns on an Excel sheet) which will help the business understand how much time testing takes, even if they change a single line of code, based on the complexity of the features.

    This is because devs throw things at test at the last minute, and not everyone is sure when things will get done, whether they will be reopened, etc.

    I was searching the internet for answers and found a few useful ones.

    What I am trying to get at here is that I think this should all be part of the test plan in general as well. Additionally, we can briefly discuss when we should stop testing (and define that clearly for new features).

    What is your experience of doing estimates based on features? Can you please suggest the best approach and share your thoughts?



