
Putting Test Case Specification directly in code

Recently we started putting the test case specifications of our integration and user acceptance tests directly in code. Before we write any tests, the test cases are defined first and placed in the code in the form of annotations. This includes specifications for tests that will be automated as well as for tests that won't. We also keep the descriptions there after the tests are written. This has many advantages; for one, it provides a more human-readable description of every test. Even if your tests are written in a readable, English-like style, there is usually more complicated data/mock setup that is not well explained by the code itself, or that is scattered all over the codebase.

For developers and quality engineers, it is usually more convenient to work with the specification directly in code, without going back and forth to an external document. Another very useful feature is that we can easily generate reports. We can see how many test cases each feature has (test cases can easily be tagged by feature or other properties) and how many of them are already automated/implemented.

We keep high-level descriptions of test cases in a test plan outside the codebase, and if someone needs to go through detailed test cases without working with the repository, a report can be generated in any other format (and kept up to date by the build server).

The only drawback might be that you need to keep the annotations and tests in sync, but you would have to do that anyway. Note that this approach might not be the best fit for teams where test cases are written by people who don't know how to program.

Tools

For JVM-based projects, we use Arquillian Governor. You can read more about it in Arquillian Governor 1.0.0.Alpha1 Released. Basically, it enables us to put test-case-specific Java annotations on test methods and generate reports from them.

We also write test case annotations in JavaScript, but we are not using any specific tool. At the moment, we simply write custom annotation comments like this:
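A minimal sketch of what such an annotation comment might look like (the tag names `@testCase`, `@feature`, and `@automated` are illustrative, not part of any framework):

```javascript
/**
 * @testCase   TC-42: Registered user can log in with valid credentials
 * @feature    login
 * @automated  yes
 */
function loginWithValidCredentialsTest() {
  // ...arrange mocks, call the login endpoint, assert the response...
}
```

The specification lives right next to the test body, and unautomated test cases can be kept as annotation comments with no function attached yet.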

We are going to implement some reporting on top of it in the future. There are already many comment parsers available for JavaScript, so it shouldn't take much work.
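A rough sketch of the kind of reporting we have in mind: scan source text for the custom tags and count test cases (and automated test cases) per feature. The tag names and the regex-based scanning are assumptions for illustration; a real implementation would lean on an existing comment parser instead.

```javascript
// Example source containing annotated test cases (illustrative tags).
const source = `
/**
 * @testCase TC-1: Valid login succeeds
 * @feature  login
 * @automated yes
 */
/**
 * @testCase TC-2: Expired password forces a reset
 * @feature  login
 * @automated no
 */
/**
 * @testCase TC-3: New user receives a welcome e-mail
 * @feature  registration
 * @automated yes
 */
`;

// Count test cases and automated test cases per feature.
function report(src) {
  const counts = {};
  const automated = {};
  const blocks = src.match(/\/\*\*[\s\S]*?\*\//g) || [];
  for (const block of blocks) {
    const feature = (block.match(/@feature\s+(\S+)/) || [])[1];
    if (!feature) continue;
    counts[feature] = (counts[feature] || 0) + 1;
    if (/@automated\s+yes/.test(block)) {
      automated[feature] = (automated[feature] || 0) + 1;
    }
  }
  return { counts, automated };
}

console.log(report(source));
// { counts: { login: 2, registration: 1 }, automated: { login: 1, registration: 1 } }
```

Running such a script on the build server would keep an always-current report of coverage per feature.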

Going further

Some people go even further and write test cases in business-readable DSLs that resemble human language. One testing framework that allows you to do that is Cucumber. Cucumber has implementations for many languages, so once you know its DSL, you can reuse that knowledge to write any kind of test.

A test case executed by Cucumber can look like this:
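For illustration, a minimal Gherkin scenario of the kind Cucumber executes (the feature and step wording are invented for this example):

```gherkin
Feature: Login
  Scenario: Registered user logs in with valid credentials
    Given a registered user "alice" with password "secret"
    When she logs in with password "secret"
    Then she should see her dashboard
```

Each step is then mapped to a step definition written in the host language, which is where the extra implementation effort comes in.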

In my opinion, implementing the rules for a domain-specific language like this adds complexity to writing tests and might not be worth the effort if all members of the team are engineers. It can, however, be very helpful when the people writing test cases are not skilled in programming.

[Written when I was working as a Quality Engineer at Red Hat]
