
Exploratory Testing - Charters

I recently read a presentation by Elisabeth Hendrickson on exploratory testing, and it offered an interesting approach to it: one that provides enough guidance, but isn't a test case. The slides are available here.

I'll take the pieces that I strongly agree with, and discuss them here.

Firstly, defining what exploratory testing is. This is very important: I believe that a number of people view exploratory testing as just randomly using the application under test and trying to break it in any manner possible, when in actual fact it should be much more than that in order to be useful. It should be:
  • Targeted
  • Structured
  • Well defined
In order to execute some exploratory testing, I think there needs to be some knowledge of the system under test, how it should behave, and its dependencies. The tester learns about the system as they begin testing; this information can be fed into the next test, and so on. The difficulty comes in reviewing what has been tested; however, I think this can be achieved with chartered exploratory tests.

I first read about chartered exploratory tests in the slides linked above. I really liked the idea behind them, and to me they make sense as an effective and time-efficient way of testing a system.

To summarise (for those who haven't read the slides above), a chartered exploratory test is made up of the following:

Explore - the area of the system that is being explored under test
With - what is going to be used to test the item that is being explored
To Discover - what the test is attempting to find out about the item under test
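The three-part template above can also be captured as a small data structure, which makes charters easy to store, review and reuse. This is a minimal illustrative sketch in Python (the `Charter` class and field names are my own, not from the slides):

```python
from dataclasses import dataclass

@dataclass
class Charter:
    """One chartered exploratory test: Explore / With / To Discover."""
    explore: str      # the area of the system being explored
    with_: str        # what will be used to test it ("with" is a keyword, hence the underscore)
    to_discover: str  # what the test is attempting to find out

    def __str__(self) -> str:
        # Render in the same three-line form used in the slides.
        return (f"Explore {self.explore}\n"
                f"With {self.with_}\n"
                f"To discover {self.to_discover}")

charter = Charter(
    explore="the Product Picture widget on the Product Page with the error console open",
    with_="general usage",
    to_discover="whether any JavaScript errors are displayed",
)
print(charter)
```

Rendering each charter back to the plain Explore/With/To Discover wording keeps it readable by anyone on the team, not just whoever stored it.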

And now for an example: a PBI has come in that means a change to the jQuery library, so each page of the website requires retesting. Taking the product page as an example (it's the ASOS website that's under test):

Explore the Product Picture widget on the Product Page with the error console open
With general usage
To discover whether any JavaScript errors are displayed

Explore the Add To Bag and Save For Later functionality on the Product Page with the error console open
With different combinations of size/colour
To discover whether any JavaScript errors are displayed

Explore the recommendations functionality on the Product Page with the error console open
With general usage
To discover whether any JavaScript errors are displayed
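The three charters above share one template, varying only the area under test and the inputs used, so a retest sweep like this jQuery change could be generated from a short list. A quick sketch (the area and input strings are taken from the examples above; the template wording is my own paraphrase):

```python
# Each charter varies only in the target area and the inputs used,
# so the whole set can be generated from (area, inputs) pairs.
TEMPLATE = ("Explore {area} on the Product Page with the error console open\n"
            "With {inputs}\n"
            "To discover whether any JavaScript errors are displayed")

areas = [
    ("the Product Picture widget", "general usage"),
    ("the Add To Bag and Save For Later functionality",
     "different combinations of size/colour"),
    ("the recommendations functionality", "general usage"),
]

charters = [TEMPLATE.format(area=area, inputs=inputs) for area, inputs in areas]
for charter in charters:
    print(charter, end="\n\n")
```

The same list could then be extended page by page (search page, bag page, checkout) without rewriting the "To discover" goal each time.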

I'm sure you can immediately see the benefits of this, as opposed to just:

Exploratory testing of the product page.

The tester knows that they have to interact with the widgets with the error console open. Otherwise the widgets might behave as they should from the user's point of view, while throwing JavaScript errors in the background; the error console will log these, and they might have been missed if there were no charter.

This gives the tester enough guidance to generally use the product page: things like Add To Bag, the search, any scroll that uses JavaScript, the pictures and the zoom functionality.

I would feel pretty confident that exploratory testing completed using the above charters would catch any JavaScript error that might be hiding in the page.
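The error-console part of these charters can even be partly automated as a safety net alongside the manual exploration. A sketch, assuming Python Selenium with Chrome, whose `get_log("browser")` call returns console entries (the URL is illustrative, and the whole browser section is guarded so it only runs when you actually have a driver installed):

```python
def severe_entries(log_entries):
    """Keep only console entries logged at SEVERE level -
    these are the JavaScript errors the charters are hunting for."""
    return [e for e in log_entries if e.get("level") == "SEVERE"]

def check_page_console(driver, url):
    """Load a page and return any JavaScript errors the console logged."""
    driver.get(url)
    return severe_entries(driver.get_log("browser"))  # "browser" log is Chrome-specific

if __name__ == "__main__":
    # Hypothetical usage; requires `pip install selenium` and chromedriver.
    from selenium import webdriver
    driver = webdriver.Chrome()
    try:
        for entry in check_page_console(driver, "https://www.asos.com/"):
            print(entry["message"])
    finally:
        driver.quit()
```

A check like this only catches errors thrown on page load; the charters still matter because many errors only appear once a tester actually exercises the widgets.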

This, I think you'll agree, is far quicker and more efficient than writing manual tests that cover all of the above.

In designing charters for exploratory testing, you still need to sit down with the developers and work out what the changes are and what is likely to break, and then think outside that box in order to come up with a targeted, structured and well-defined charter.

I do, however, think that a new tester could come in, immediately pick up the charters and do their testing around them, with little background knowledge.


  1. Is there an easy way of documenting these kinds of exploratory tests on the fly, and possibly including them in test case documents? I am guessing the answer is to use SpecFlow, but that could be overkill because these tests are usually done quickly, one after another. Any thoughts?

    1. I would suggest just creating them as normal test cases in whatever test case management tool you use (I'm using Microsoft Test Manager currently). This makes it easy to pass/fail the test and raise bugs against the test case. Obviously, if you raise a bug it's important to put the steps to recreate it in the bug, as the test case itself wouldn't specify exactly what is needed to recreate it.

    2. OK. I have not used any sort of test management tool before at all; I still write test cases in Excel sheets. Can you write a post in the future explaining the benefits of using one? :)

    3. Sounds like a very good idea! I'll let you know when it's done!

  2. After reading through your post on "value of certifications", I enjoyed reading this article. I have been practising exploratory testing for about 5 years now and I would definitely recommend it to any tester.

    I like the way you have developed your charters. Did you then time-box your testing session? Also, did you capture notes, bugs and observations in your session sheet when testing? If not, I would definitely recommend you do so.

    1. To be honest, this was only used as an example, but yes, it would have been good to timebox the session and like you say, capture notes, bugs etc too :)

    2. Hi Sharath! I'm not very familiar with the exploratory testing methodology and am interested in learning its approach. Please help me understand "time-boxing a testing session", capturing notes, etc.


