Monday, 11 March 2013

Writing Test Plans

Firstly, let me remind you of the following scene in There's Something About Mary:

Hitchhiker: You heard of this thing, the 8-Minute Abs?  
Ted: Yeah, sure, 8-Minute Abs. Yeah, the exercise video. 
Hitchhiker: Yeah, this is going to blow that right out of the water. Listen to this: 7... Minute... Abs.  
Ted: Right. Yes. OK, all right. I see where you're going.  
Hitchhiker: Think about it. You walk into a video store, you see 8-Minute Abs sittin' there, there's 7-Minute Abs right beside it. Which one are you gonna pick, man?  
Ted: I would go for the 7. 
The quicker something can get done, the better, right? Within reason, of course...

On any project that involves some form of testing, you as a QA will be required to write some form of test plan document. Everywhere I've been has done this slightly differently, but the premise has been the same: to let people know what is going to be tested for the project.

A test plan written up front can be invaluable, although I question the amount of time one should spend on writing it. I do not see the value in writing a test plan on a per-project basis, which I have done in the past, and still do today if that is what is required. In my opinion, too much time is spent writing test plans that are often looked at once and then forgotten about.

I much prefer, and see more value in, writing a test plan on a per-release basis, as you know more about what will be tested, and each release has different implications for what needs testing. On a recent project, we had one release that could have had an impact on performance, so we carried out performance testing for that release. If we had a single test plan for the entire project, this could easily cause confusion, as performance testing was only performed for that one release.

Another thing is that test plans should take no more than 20 minutes to write (even less if the release is small). They shouldn't be long documents; they should be a simple, high-level list of what is being released and what needs to be tested for that release, noting any risks along the way. If more detail is required, the test cases can be looked at to give a clearer picture. This also makes updating the test plan easier if a new feature is added or the scope is increased.
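To make that concrete, a 20-minute per-release test plan along these lines might look something like the sketch below (the release name, features, and risks are invented for illustration):

```
Test Plan: Release 2.4

What's in this release:
  - Feature A: new search filters
  - Feature B: export to CSV
  - Bug fixes in the checkout flow

What will be tested (high level):
  - Search filters: functional testing, cross-browser spot checks
  - CSV export: functional testing, large-dataset check
  - Checkout: regression of the fixed bugs plus the happy path
  - Performance: only if release metrics suggest an impact (not this release)

Risks:
  - CSV export touches shared reporting code; regression risk in reports

Detail: see the test cases for each story.
```

The point is the shape, not the content: a bulleted list that fits on one page, with anything deeper deferred to the test cases.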

So if we spend 15 minutes on a test plan per release, we're probably spending just as much time (if not less) as we would writing one per project, only it's far more valuable because it's more relevant to whatever is actually being worked on.

As an interesting side note (and a driver for this discussion), in How Google Tests Software the authors discuss test plans, and describe putting a group of testers in a room with 10 minutes to come up with a test plan for a certain project. In those 10 minutes, they came up with just as much as if they had prepared a long, formal test plan, but instead of filling it with unnecessary words, they used bullet points and tables of the information that a test plan needs. This is definitely something I wish to try one day!

Watch out for my next blog post... the 9-minute test plan ;)


  1. What about a Master Test Plan? Surely that's why some banks prefer them: to list the test cases/test areas.

    Or just an agile test plan, a 2-3 page document.

  2. Is this test plan done per release or per story? I'm asking because a release in my current company sometimes consists of 10-15 stories (a 1-2 month cycle). Our requirements change so much and so frequently that it is almost impossible to plan anything beforehand for an entire release, but we do a very high-level test plan (very informal, and only when necessary) before testing a story.

    Furthermore, a story is usually subdivided into individual tasks in my current company. E.g. a new functional area is shared by 3 dev guys, each working on a different part: the database sprocs, the back-end task, the front end. Since all of them are working on individual tasks, I need a plan for each of these tasks separately, because they will be tested separately. But we also need to run a regression on the story before closing it or pushing it to UAT, and make another test plan to make sure they all come together.

    I am confused about test plan now. :)

    Any thoughts?

    1. I'm talking about a high-level test plan per release, which is even more important when you are doing smaller releases; and if a test plan only takes 10 minutes, then you're not really wasting time in doing one.

      A test plan should be open to change as stories progress; however, changes in a story "shouldn't" change the test plan too much, as they will be reflected in the test cases.

      It's important not to get bogged down updating test plans, test cases, etc. when there often isn't much business value in it. Providing there is a test plan in place that details most of what you wish to achieve, I personally do not see the point in updating it; in my eyes the test cases form a huge part of the test plan, so updating it in two places is a waste of time and effort.

      In your specific examples, I would have a test plan per release, as mentioned. Test cases will be created for each story, and these form part of the test plan, although they won't be referenced in it directly (otherwise changes to the test cases would mean updating the test plan). The test plan should highlight any regression testing to be carried out, whether performance testing is needed, what the high-level areas of impact are, that kind of thing.

      I think I may have rambled a bit, but I hope it's a bit clearer.... :)

    2. makes complete sense! thanks

  3. I am trying to write a test estimate document. We don't story-point before the start of a release, or at any stage at all (not ideal, I know!). I am now making a simple document (feature and time columns on an Excel sheet) which will help the business understand how much time testing takes, even if they change a single line of code, based on the complexity of the features.

    This is because devs throw things at test at the last minute, and not everyone is sure when things will get done, whether they will be reopened, etc.

    I was searching on the internet for answers and found a few useful ones.

    What I am trying to get at here is that I think this should all be part of the test plan in general as well. Additionally, we could briefly discuss when we should stop testing (defining that clearly, for new features only).

    What is your experience of doing estimates based on features? Can you please suggest the best approach and share your thoughts?