
QA Vision for the next 12 months

I was recently asked about a vision for QA over the next 12 months: where would I like QA to be, and how am I planning on achieving that?

I thought I'd write down where I want QA to be and document the progress over the next year or so, and hopefully achieve most, if not all, of what I want.

Firstly, a big problem where I currently work is performance testing. We hand a system over at the end of a project to a third party, who then run performance tests on it and come back with results. There are a number of issues with this, the main one being that we are leaving something incredibly important right to the end of a project, so any issues found are extremely difficult to fix. The first thing I want to do, then, is embed performance testing right into the sprint and actually do it in an Agile way. I read a blog post here and really want to achieve that; we have the tooling to do it in house, so why wouldn't we? Sure, it will require some help from experts at the start, but eventually we should be able to bring it entirely in house. The benefits are fewer shocks at the end of a project, a smoother release process and a quicker time to release, as there is no longer a two-week period needed for performance tests to run. To achieve this we will need better acceptance criteria around performance, and education around coding best practices, but it is a big goal and I really want to achieve it. I would argue this is the highest priority of everything I want to achieve over the next 12 months.
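To make "better acceptance criteria around performance" concrete, a criterion can be expressed as a pass/fail check a team runs inside the sprint. The sketch below is purely illustrative: the percentile, budget figure and sample latencies are assumptions, not our actual targets.

```python
import math


def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples (in ms)."""
    ordered = sorted(samples)
    # Nearest-rank method: the ceil(pct% * n)-th smallest value (1-based).
    idx = max(0, math.ceil(pct / 100 * len(ordered)) - 1)
    return ordered[idx]


def meets_budget(samples, pct=95, budget_ms=800):
    """True if the pct-th percentile latency is within the budget."""
    return percentile(samples, pct) <= budget_ms


if __name__ == "__main__":
    # Hypothetical response times gathered during a sprint test run.
    latencies = [120, 340, 95, 780, 210, 450, 990, 130, 160, 300]
    print(percentile(latencies, 95), meets_budget(latencies))
```

A check like this could run in CI after every build, so a performance regression surfaces in the sprint that introduced it rather than in a third-party report at the end of the project.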

Next up is to have some form of induction process for new QAs; currently there isn't one. New QAs are put into teams with no induction covering system architecture, QA processes or the automation framework. Nothing at all. I want to rectify this. The problem is that it is time consuming and can vary slightly from team to team, but the goal is to make it as generic as possible whilst still giving huge value to new members. The advantage is that new QA members can hit the ground running more quickly, with less time wasted asking questions and waiting for answers; all that information can live in the induction pack they complete.

We do a lot of releases, but as of now there is no automated test pack for a release. We are in the process of rectifying this by creating an automated deployment test pack that can be run in production and pre-production, and will verify the core functionality of the website. The goal is to have teams run this test pack as part of CI on a nightly basis. The benefits are that teams will have confidence that their code is working as it should, and the actual deployment and release will be quicker and hopefully smoother. This will reduce the time taken to deploy new code, which is essential if we are to achieve more regular releases.
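At its simplest, a deployment test pack like the one described above is a list of core pages plus a runner that reports pass/fail per page. This is only a sketch: the routes and base URL are made up for illustration, and the HTTP call is injected so the same checks can point at pre-production, production, or a stub in a unit test.

```python
# Hypothetical core routes a smoke pack might verify after a deployment.
CORE_PAGES = ["/", "/search", "/basket", "/login"]


def run_smoke_pack(base_url, fetch):
    """Run every core-page check and return a dict of path -> pass/fail.

    `fetch` is any callable taking a URL and returning an HTTP status code
    (e.g. a wrapper around a real HTTP client); an exception counts as a fail.
    """
    results = {}
    for path in CORE_PAGES:
        try:
            results[path] = fetch(base_url + path) == 200
        except Exception:
            results[path] = False
    return results


if __name__ == "__main__":
    # Stub fetcher standing in for a real HTTP client during development.
    fake_ok = lambda url: 200
    print(run_smoke_pack("https://preprod.example.com", fake_ok))
```

Because the pack is just code, a nightly CI job can run it against pre-production and fail the build on any non-passing page, which is the confidence signal described above.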

I also feel that, unfortunately, we have no clear process for security testing. We have done security testing in the past, but the approach needs to be documented and a clear partnership with the third party established. We need to manage this properly so we don't end up in a position similar to where we currently are with our performance testing.

I would also like to work on a clear career development plan with the QA members in the teams, similar to what I've documented here, but make it official, so it's clear which skills are strong and which skills QA members need to work on to progress to the next level. I feel this would also give visibility of the areas we are lacking in as a QA community, so we can look at addressing those weaknesses.

Next, I want mobile automation to be used and adopted by all the teams, so that the Android and iOS applications and the mobile website each have some form of automated test pack that can be run. We know the tooling we want to use; it's just a matter of setting up the framework so that tests can easily be created and added by the teams. (FYI, the toolset is going to be Espresso for Android and KIF for iOS.)

Finally, I wish to develop a strong culture of Research & Development: a place where QAs can work on individual projects that will ultimately benefit the team. I'm not entirely sure how to get this started, but a simple way is to have regular meetings where people can chat about things they think would be good for their team(s). Then there's also going to conferences and the like, speaking to other people and finding out what they have worked on, what went well and what didn't. Maybe, come the end of the 12 months, we could even host an external QA event for the public, get people speaking at it, and show that ASOS isn't just about fashion, but about the technologies that help deliver it to multiple platforms.

These are the main points I wish to achieve. I'm sure others will be added over time, but I'm positive there is enough there to keep me and others busy implementing the above! This obviously needs buy-in from everybody involved, but I strongly believe that if we achieve it all we will have a very strong QA department, and one that is fun and challenging to work in.

I will definitely keep you all posted. Do you have a QA vision for the next year? What do you wish to achieve with your work?


