
Unicom - 12th NextGen Testing Conference

I recently (though it seems a long time ago now, as I've only just got round to writing this!) had the pleasure of attending the Unicom 12th NextGen Testing Conference in London. I was lucky enough to win free tickets, and a free conference is definitely appealing! :)

I thought it would be good to write a blog post about the conference: what I learnt, what I got out of it and whether I enjoyed it...

The conference itself was chaired by Donald Firesmith, who was over from Pittsburgh and also gave a talk on the Many Types of Testing & Testing Philosophies.

Many Types of Testing & Testing Philosophies

This was a very interesting talk, and possibly one of my favourites. It opened my eyes a bit: I've been in testing for 8 years, but there were some types of testing in there that I had not heard of, and even some that I knew of that weren't included. I had a chat with Donald afterwards and we agreed they should be in there: Localisation testing (ensuring the correct content is delivered based on localisation, and even personalisation) and Persona-based testing (taking on the role of a user and acting in that way).

He also made a statement that was a bit scary, but probably very true:

"Testers are aware of only a minority of the types of testing, and test managers/leads even less"

This is scary because, well, it's true. I'm more hands-off than I used to be now that I've become a lead, and it's important for me to stay up to date with the different testing methodologies and philosophies in use today. Reading blogs and articles online is one way of staying current, but it's when you put them into practice that you really learn. It's definitely something I want to work on, and I'll hopefully get the chance over the next few months by working more closely with the teams and senior testers.

The next talk was also very interesting, and was given by Colin Deady, a Test Manager at Capita IT.

Behaviour Driven Development - You can deliver Zero known Defect releases

This talk was, as the title suggests, about how BDD can help deliver zero-defect releases. It's about the team mindset and signing up to deliver quality software: BDD can help deliver zero "known" defect releases (note the "known"), but it's a whole lot of other things that together deliver quality software.

Things like the team signing up to fix defects within certain time frames, and signing up to review and write BDD scenarios. Interestingly, a zero-defects mentality can in fact kill motivation: you can't force a team to deliver zero known defects. As mentioned above, it's about empowering the team and giving them control over how they deliver zero known defects.
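As a sketch of what signing up to write and review BDD scenarios can look like in practice (the feature and names here are hypothetical, not from Colin's talk), a Given/When/Then scenario can be expressed as a plain test that the whole team can read:

```python
# A minimal Given/When/Then sketch in plain Python (pytest style).
# The Account class and its withdrawal rule are purely illustrative.

class Account:
    def __init__(self, balance=0):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount


def test_withdrawal_reduces_balance():
    # Given an account with a balance of 100
    account = Account(balance=100)
    # When the user withdraws 30
    account.withdraw(30)
    # Then the balance is 70
    assert account.balance == 70
```

Teams doing BDD more formally would usually write the scenario in Gherkin and bind it to step definitions, but the shape of the conversation is the same.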

There was also a round table session where we could choose to sit at one of the following tables:

- Test Automation
- Agile
- Testing in DevOps

And some others that, ashamedly, I can't remember. I chose to sit at the Test Automation table, and it was definitely very interesting hearing where people are on their automation journey. I call it a journey, but it seems like a journey that will never end. I was pleased to be in a position to offer guidance and help others avoid the mistakes that I've read about and seen happen time and time again. In particular, there were two people from Kent University talking about making changes to legacy systems, systems with zero unit test coverage. I told them about the Boy Scout Principle: leave the campsite better than you found it. Applied to their problem, that means when you refactor some legacy code or touch it, make it better: add some unit tests and so on. That is one sure-fire way of improving the quality of your code, slowly but steadily.
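To illustrate the Boy Scout Principle in code (a made-up sketch, not the Kent University system): when you have to touch an untested legacy function anyway, pin down its current behaviour with a characterisation test before you change it, so the suite is a little better than you found it:

```python
# Hypothetical legacy helper we need to touch anyway. Before editing it,
# add a characterisation test capturing what it does today.

def format_price(pence):
    """Legacy helper: formats an integer price in pence as pounds."""
    pounds = pence // 100
    remainder = pence % 100
    return f"£{pounds}.{remainder:02d}"


# The "leave the campsite better" part: a test added while we were in the
# file. Future refactorings of format_price now have a safety net.
def test_format_price_pads_pence():
    assert format_price(1050) == "£10.50"
    assert format_price(5) == "£0.05"
```

Repeat that every time the code is touched and coverage of the legacy system grows steadily without a big-bang rewrite.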

The other interesting talk was presented by the amazing Dot Graham. I got talking to her at lunch and found out that I had read one of her books, so that was pretty cool!

It Seemed a Good Idea at the Time - Intelligent Mistakes in Test Automation

I enjoyed this talk. A lot of it I was already aware of, but she did point me in the direction of a website that I didn't know existed, and I have subsequently spent a lot of time reading it. It's definitely worth a look. It highlights a lot of common mistakes that people make when they attempt Test Automation, and I think a number of people learned a lot in this session.

The most interesting part of this was how people measure ROI on Test Automation. People responded with the usual answers, like "more time to do other forms of testing", when in fact ROI is defined by Wikipedia as:

Return on investment, or ROI, is the most common profitability ratio. There are several ways to determine ROI, but the most frequently used method is to divide net profit by total assets. So if your net profit is $100,000 and your total assets are $300,000, your ROI would be .33 or 33 percent.
So it stumped almost everyone, I think. It's extremely difficult to quantify the Return on Investment of Test Automation, yet "What's the ROI?" is a common question that people looking to invest in automation will be asked, and the answer itself is very difficult to measure!
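As a back-of-the-envelope sketch (all figures hypothetical), one way teams approximate automation ROI is (benefit − cost) / cost, treating the manual regression time the automation saves as the benefit. It's a crude model, which is rather the point of the discussion above:

```python
# Rough automation ROI sketch: (benefit - cost) / cost.
# Every number here is made up for illustration.

def automation_roi(build_cost, maintenance_cost, manual_hours_saved, hourly_rate):
    """Return ROI as a fraction: (benefit - cost) / cost."""
    cost = build_cost + maintenance_cost
    benefit = manual_hours_saved * hourly_rate
    return (benefit - cost) / cost

# E.g. £20,000 to build the suite, £5,000 a year to maintain it,
# saving 800 hours of manual regression testing at £40/hour:
roi = automation_roi(20000, 5000, 800, 40)
print(f"{roi:.0%}")  # 28%
```

The hard part, of course, is that "hours saved" ignores the less tangible benefits (faster feedback, more time for exploratory testing), which is exactly why the question stumped the room.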

The final talk that I'm going to mention was presented by Raji Bhamidipati and was one that I was particularly looking forward to...

Pair testing in an Agile Team

We're all aware of Paired Programming and how it helps, or at least "can help", deliver good-quality code. One thing that isn't mentioned as often is Paired Testing. I was particularly interested in this talk because we had discussed Paired Testing in a community meeting and even done the following exercise from Tasty Cupcakes: Pairing for Non Developers. So I was interested to see what other people were doing when it came to pairing.

Raji didn't disappoint. She covered the benefits of paired testing: complementary skill sets that work well together and can keep both people engaged. Naturally, some people are not going to pair well together; if people don't get on, they understandably won't benefit from this approach. It's also important, as Raji mentioned, to keep both people engaged. Popcorn pairing is good for this: one person is the driver and the other the navigator, making notes. If both people are not engaged, it can be a waste of time.

That said, and with my experience of the pairing exercise mentioned earlier, it's definitely something that I recommend. And not just Paired Testing, but pairing with developers to help write code and spot bugs as the code is written. Most things in life are better when you do them with someone else, and testing/engineering/developing is definitely one of those things!


All in all it was a good conference. Whilst I didn't learn as much as I thought I might, it definitely reinforced what I already knew, made me think about certain aspects of test automation especially, and opened my eyes a bit towards pairing and the different types of testing that there are. Perhaps most importantly, it has made me want to present at a conference in the future. I spoke to Rob Lambert and mentioned that one of the struggles I have is finding something to talk about. He gave me some good advice: talk about my past experiences. And sure enough, I've found something I want to talk about, so watch out!

