
Considerations when creating automated tests

We recently released our automated regression pack to a number of teams, having worked on it over the past few months. The pack tests legacy code and contains a large number of tests.

As a bit of background, a number of teams are working on new solutions whilst some are still working on legacy code. With this in mind, we put together an email with a list of guidelines for creating new tests to be added to this regression pack. I figured the guidelines are broad enough to apply to any organisation, so I thought they would make an interesting blog post...

So here goes,  when creating automated tests, it's important to consider and adhere to the following:

- Think about data. The tests need to retrieve or set up the data they need without any manual intervention, which makes them more robust and easier to run unattended (see the setup/teardown sketch after this list).
- The tests need to be idempotent and self-contained. When each test is standalone and does not affect other tests, you won't get random failures when the tests happen to run in a certain order.
- The tests should execute and pass both when run in isolation and when run in sequence.
- There should be no dependencies between tests, and it should not matter in which order they are run.
- The tests should also run on international sites. If a test doesn't apply to one or more sites, use the appropriate tagging (see the tagging sketch after this list). The tests shouldn't be concerned with language and the like; if the element is the same across sites, the test should still be able to run and find it.
- The tests shouldn't be flaky. Random failures shouldn't happen.
- The tests should be able to run on any development environment. There should be no third-party dependencies that are unavailable in some development environments; this way, teams will get the most value out of the tests.
- The tests should follow the existing structure, if one is in place. We used the Page Object model, so our structure was built around it (see the page object sketch after this list).
- The tests shouldn't take long to run. This is subjective, but if a test takes longer than performing the same check manually, try to come up with a better solution.
- Try to reuse code wherever possible; do not duplicate.
- Existing Page Objects should also be used where possible
- Changes to shared packages should be run past the other teams first.
- Avoid testing too many things in a single test, and avoid tests that are "too long". Ideally, when a scenario fails, you should immediately know what went wrong.
- Try to keep scenarios within system boundaries. For example, if your test needs some products in the bag, don't set that up through the UI; do it via the DB (see the data-seeding sketch after this list).
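
As a rough illustration of the first two points, here's a minimal sketch using NUnit (the TestDataApi helper and TestCustomer type are hypothetical, not from our actual pack). Each test creates the data it needs and removes it afterwards, so the tests stay independent and can run in any order:

using NUnit.Framework;

[TestFixture]
public class BasketTests
{
    private TestCustomer _customer;

    [SetUp]
    public void CreateTestData()
    {
        // Create a fresh customer for this test only - no shared state,
        // no manual intervention needed before a run.
        _customer = TestDataApi.CreateCustomer();
    }

    [TearDown]
    public void RemoveTestData()
    {
        // Clean up so the next test (or a re-run of this one) starts from scratch.
        TestDataApi.DeleteCustomer(_customer.Id);
    }

    [Test]
    public void CustomerCanAddItemToBasket()
    {
        // ... exercise the basket using _customer ...
    }
}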
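
For the tagging point, NUnit categories are one way to do it (if you're on SpecFlow, Gherkin @tags map onto categories in the same way; the "UKOnly" name below is just an example, not a tag we actually use):

using NUnit.Framework;

[Test]
[Category("UKOnly")] // hypothetical tag - this check only makes sense on the UK site
public void CheckoutShowsGiftWrapOption()
{
    // ...
}

The international builds can then exclude that tag when running the suite, e.g. with the NUnit 3 console runner:

nunit3-console RegressionPack.dll --where "cat != UKOnly"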
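
On structure, a cut-down page object looks something like this (Selenium WebDriver in C#; the page and locator names are illustrative). Keeping locators and interactions in one class means a UI change is fixed in one place rather than in every test that touches the page:

using OpenQA.Selenium;

public class LoginPage
{
    private readonly IWebDriver _driver;

    public LoginPage(IWebDriver driver)
    {
        _driver = driver;
    }

    // Locators live in one place on the page object.
    private IWebElement UsernameField => _driver.FindElement(By.Id("username"));
    private IWebElement PasswordField => _driver.FindElement(By.Id("password"));
    private IWebElement SignInButton => _driver.FindElement(By.Id("sign-in"));

    // Tests call intent-level methods rather than poking at elements directly.
    public void LogInAs(string username, string password)
    {
        UsernameField.SendKeys(username);
        PasswordField.SendKeys(password);
        SignInButton.Click();
    }
}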
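
And for keeping scenarios within system boundaries, the idea is a helper along these lines (a sketch only - the BagItems table, column names and connection string are made up for illustration). The test calls this to get a product into the bag in milliseconds, instead of driving the UI through search, product page and add-to-bag:

using System.Data.SqlClient;

public static class BagSeeder
{
    // Assumed to point at the environment's test database.
    private const string ConnectionString = "...";

    public static void AddProductToBag(int customerId, int productId)
    {
        using (var connection = new SqlConnection(ConnectionString))
        {
            connection.Open();
            using (var command = new SqlCommand(
                "INSERT INTO BagItems (CustomerId, ProductId, Quantity) VALUES (@c, @p, 1)",
                connection))
            {
                command.Parameters.AddWithValue("@c", customerId);
                command.Parameters.AddWithValue("@p", productId);
                command.ExecuteNonQuery();
            }
        }
    }
}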

So that's pretty much it :)  

If you can think of anything else that I might have missed off, that you think needs to be considered when creating an effective and reliable set of automated tests, please let me know!
