
Unit Tests? Integration Tests? Acceptance Tests? What do they mean?

I'm currently working with a new team who haven't really worked in an Agile way before and don't have much experience of the types of testing you can do on an application, so in preparation I tried to come up with simple definitions of the above types of tests.

I thought it would make a good blog post, as it's something I would undoubtedly find useful at a future point... So here goes:

A Unit Test is a simple test that exercises a single piece of code/logic. When it fails, it tells you exactly which piece of code is broken.
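As a rough sketch (in Python, pytest style, with a hypothetical `calculate_vat` function invented purely for illustration), a unit test can be as small as this:

```python
# A minimal unit-test sketch. calculate_vat is a hypothetical
# function made up for this example, not from a real codebase.
def calculate_vat(net_price):
    """Return the price including 20% VAT."""
    return round(net_price * 1.20, 2)

def test_adds_twenty_percent():
    # Exercises a single piece of logic; a failure here points
    # straight at calculate_vat and nothing else.
    assert calculate_vat(100.00) == 120.00
```

The test knows nothing about databases, browsers, or other components; if it goes red, the fault is in that one function.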

An Integration Test is a test that checks the combination of different pieces of an application. When it fails, it tells you that your system(s) are not working together as you thought they would.
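A minimal sketch of the idea, again in Python: here a hypothetical `UserStore` class is tested together with a real (in-memory) SQLite database, so the test covers our code *and* the database engine working in combination.

```python
# A minimal integration-test sketch. UserStore is a hypothetical
# class invented for this example; the point is that the test
# exercises it together with a real database engine.
import sqlite3

class UserStore:
    def __init__(self, conn):
        self.conn = conn
        self.conn.execute("CREATE TABLE IF NOT EXISTS users (name TEXT)")

    def add(self, name):
        self.conn.execute("INSERT INTO users (name) VALUES (?)", (name,))

    def count(self):
        return self.conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]

def test_store_and_count_users():
    store = UserStore(sqlite3.connect(":memory:"))
    store.add("Alice")
    # A failure here means our code and the database are not
    # working together as we thought they would.
    assert store.count() == 1
```

If `test_store_and_count_users` fails, the bug could be in the SQL, the schema, or the way the pieces are wired together, which is exactly what this layer of testing is for.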

An Acceptance Test is a test that checks the software does what the customer/user expects of it. When it fails, it tells you that your application is not doing what the customer/user thought it would do, or even what it should do.
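One way to picture the difference is that an acceptance test is phrased as a user journey rather than as a check on one component. A sketch in Python (the `ShoppingBasket` class and the scenario are entirely hypothetical):

```python
# A minimal acceptance-test sketch, written as a user journey.
# ShoppingBasket and its methods are hypothetical examples.
class ShoppingBasket:
    def __init__(self):
        self.items = []

    def add_item(self, name, price):
        self.items.append((name, price))

    def checkout_total(self):
        return sum(price for _, price in self.items)

def test_customer_buys_two_books_and_sees_the_right_total():
    # Given a customer with an empty basket
    basket = ShoppingBasket()
    # When they add two books
    basket.add_item("TDD by Example", 30.00)
    basket.add_item("Clean Code", 25.00)
    # Then checkout shows the total they expect to pay
    assert basket.checkout_total() == 55.00
```

Note the Given/When/Then shape: the test name and steps describe what the *user* wants to achieve, not which internal pieces of code run to achieve it.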

These are quick and dirty definitions of the different types of testing you might come across in a project. There are more, but these are the ones that I am going through with the team, so these are the ones that have made it into this blog post!

Feel free to agree/disagree/add more...


  1. Your definitions are, more or less, what I would have said. But there is a key distinction that needs to be identified, I think, between Acceptance Testing and all the rest.

    Instead of the traditional testing "pyramid", think of a modern rail or vehicle bridge.

    All the "lower" forms of testing cover the "vertical" stacks: unit, functional, and even some forms of integration are basically the tests that ensure that the pillars or pylons of the bridge are sound.

    Acceptance Testing, however, covers the "horizontal plane". It is designed to be sure of one basic goal: can the user cross the bridge? Can he get from point A to point B, consistently?

    Why is this distinction important? Well, because it helps to better understand what we mean when we say something is "covered".

    Staying with the metaphor, I can write a Gherkin spec covering a user journey across that bridge that passes consistently for months. What does that tell us with any certainty about the underlying bridge supports? Only that they managed to keep the bridge up while I crossed it.

    But without unit, functional, and integration tests, the Gherkin specs can't know if any particular pillar has hairline cracks in the concrete, or that a faulty girder bolt has sheared and is putting extra stress on the suspension cables, or that debris is building up around the base, which will eventually rot the connecting beams.

    And why is all this important? Because a lot of people point to that old testing "pyramid" and complain about "duplication of effort", not realizing that you're testing *two different things*. The user journey and the application are fundamentally two different things, and the testing must reflect that. So yes, it's possible that some unit tests are exercising the same piece of code as a Gherkin spec, but they're doing it *under different conditions*, in different contexts, with different goals in mind. What those folks who complain about duplication are missing is that testing is not a linear activity, and that product quality is not one-dimensional.

