
Engineering on Legacy Code

A recent project I was on involved testing a lot of legacy code; in fact, I think it was all legacy code! So I thought I'd write a few notes on the challenges and what you should look out for.

Firstly, let me start by defining what I mean by legacy code. I have seen definitions which state that any code without unit tests is legacy code. Whilst that is true, I also like to think of legacy code as code that isn't being refactored or improved upon; it is what it is. To quote Ronseal, "it does exactly what it says on the tin".

The problem with the Ronseal analogy is: what happens if you can't find the tin? Or can't make sense of the tin? This brings me onto the first challenge. If it is legacy code and there's no supporting documentation around how it works or what certain features are for, then our lives as testers (and developers) become difficult. We have to ask questions about what certain things do, and more often than not the person we ask won't know either. This tripped me up in the project's first release, and I'm not ashamed to admit that we had to roll back the release due to a bug caused by us not truly understanding a legacy feature. It was a good lesson: we learnt from it, and we were far more cautious and inquisitive about future releases. We made sure we understood everything.

Which brings me onto the next challenge/tip: make sure you understand everything around the legacy code you are testing. If there's documentation, read it; if there are questions that need answering, ask them. There is no such thing as a stupid question! This will all help drive your testing and help you decide what and how to test.

Another challenge of testing legacy code is that you are often limited by what has been developed in the past. For instance, we wanted to performance test an internal application, but we had no scripts for this; performance testing of this application had never been considered necessary, until now. We needed it because we were increasing the amount of data for certain calls. We didn't have time for anyone to develop full performance tests, so we decided to test at a lower level, against the sprocs (stored procedures) that retrieved and set the data. This gave us enough confidence and was relatively quick and easy to do.
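We didn't keep anything fancy for this, but the idea can be sketched roughly as follows: call the sproc repeatedly, record how long each call takes, and look at the median and worst-case timings. This is a minimal Python illustration, not our actual tooling; the `run_sproc` function is a hypothetical stand-in for whatever executes your stored procedure against the real database.

```python
import statistics
import time


def time_calls(call, runs=20):
    """Invoke the given zero-argument function `runs` times and
    return (median, 95th percentile) of the elapsed seconds."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        call()
        timings.append(time.perf_counter() - start)
    timings.sort()
    p95 = timings[max(int(len(timings) * 0.95) - 1, 0)]
    return statistics.median(timings), p95


# Stand-in for executing the sproc; against a real database this would be
# something like cursor.execute("EXEC dbo.GetCustomerData ?", customer_id).
def run_sproc():
    time.sleep(0.001)  # simulate a quick database round trip


median, p95 = time_calls(run_sproc, runs=20)
print(f"median={median * 1000:.1f}ms p95={p95 * 1000:.1f}ms")
```

The nice thing about timing at this level is that you sidestep the UI and network layers entirely, so you get a quick answer to the one question that mattered to us: does the sproc still respond acceptably with more data behind it?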

Finally, with there being no unit tests on the code and no automated tests that worked, we were forced to do more manual testing than I would have liked.
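One way to claw back some automated coverage on code like this (we didn't have time for it on this project, but it's worth knowing about) is a characterization test: instead of asserting what the code should do, you pin down what it currently does, so any future change that alters the behaviour fails loudly. A rough Python sketch, where `legacy_price` is a made-up stand-in for some untested legacy function:

```python
def legacy_price(quantity, unit_price):
    # Imagine this is the untested legacy code: nobody remembers why
    # orders of 10 or more get exactly 7% knocked off, but they do.
    total = quantity * unit_price
    if quantity >= 10:
        total *= 0.93
    return round(total, 2)


def test_characterize_legacy_price():
    # These expected values were captured by running the code as-is,
    # not derived from a spec; they pin the current observed behaviour.
    assert legacy_price(1, 5.00) == 5.00
    assert legacy_price(10, 5.00) == 46.50
    assert legacy_price(12, 3.25) == 36.27


test_characterize_legacy_price()
print("characterization tests pass")
```

You're not claiming the 7% discount is right; you're claiming it's what happens today, which is exactly the safety net you want before touching code that "does what it says on the tin".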

Despite the above challenges, we successfully released the project on time. A lot of this was down to how we managed the releases, releasing small pieces in quick succession. For instance, had we released big bang and then found the bug that caused the first release to be rolled back, we would have had to roll back everything, which would not have been fun!

So there you have it: a few challenges I came across when testing on a legacy system. What challenges can you think of? This post started with the title "Testing on legacy code", but I think if you replace the word testing with developing, a lot of the points still hold true; it's not just about testing but engineering on legacy code. I know you can make the case that the above is everything you should be trying to achieve when testing any code, but when testing legacy code, these points are even more important.

