Introduction
Automated tests have become widely accepted in software development. Numerous tools like TestNG or JUnit provide great support for writing and running them efficiently. New methods like TDD or BDD strive to integrate testing even more deeply into the development process.
The only thing which hinders even wider acceptance is the fact that writing tests takes additional time. I do not want to dive into the philosophical discussion of whether writing tests pays off in the long run, but it remains a fact that instead of writing just the single method needed in the production code, the developer has to write at least one additional test method.
So the obvious goal for a test framework must be to reduce the overhead of writing and maintaining tests, and to make sure that we get the most out of the additional time spent.
In this article, I will introduce a new approach to testing. It does not replace but extends the traditional approach, so it can easily be integrated with existing tests and tools.
The new approach offers the following advantages:
Creating tests is easier because the expected result does not have to be included in the test source, but is checked using a visual yet automated approach
Maintaining tests is easier because all needed information is visible at a glance and changes can easily be made without needing to change Java code
Documenting tests is done automatically without additional effort in a way that can even be shown to your manager or customer
You think this sounds too good to be true? Let's have a look at a first example.
A first example
In our first example, we want to check that a simple static string concatenation method works as expected. To test this method, we write the following test method:
@Test
public static void concat() {
    StaticMethod.concat("s1", "s2");
    StaticMethod.concat(null, "s2");
    StaticMethod.concat("s1", null);
}
If we run this test using MagicTest, we get the following output:
This looks promising, but how does this work?
The visual approach
To make this work, we have to change the definition of when a test is considered successful.
Currently, your test methods are made up of various assertions or calls that can throw exceptions. When you run a test, it is considered successful if it completes without throwing any exception or if it throws an exception that was expected.
If we want to test our string concatenation method with the traditional approach, we end up with test code like this:
assertEquals(StaticMethod.concat("s1", "s2"), "s1s2");
As we can see, we have to include the expected result in the test code so it can be compared using an assertion.
To avoid the need to include the expected result in the test code, we specify that our test methods should output the relevant data, which is then compared by the test framework against a stored reference output. If the actual and the reference output differ, the test has failed.
So the extended definition is the following:
A test is considered successful if it completes without throwing any exception (or if it throws an exception that was expected) and if the actual output is equal to the stored reference output.
In fact we simply adopt the way a developer would test without using a test framework: he would check his code visually by looking at the output of println() statements or by inspecting values interactively in the debugger.
So we can say that MagicTest automates the visual testing approach.
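To make the principle tangible outside of MagicTest, here is a minimal hand-rolled sketch of the same idea: the output of the calls under test is collected into a string and compared against a stored reference file. The file name and the VisualCheck class are invented for this illustration; MagicTest performs these steps for you.
import java.nio.file.Files;
import java.nio.file.Path;

// Minimal sketch of the principle, not MagicTest's API: collect the output
// of the calls under test and compare it against a stored reference file.
public class VisualCheck {
    public static void main(String[] args) throws Exception {
        StringBuilder actual = new StringBuilder();
        actual.append(StaticMethod.concat("s1", "s2")).append('\n');
        actual.append(StaticMethod.concat("a", "b")).append('\n');
        Path referenceFile = Path.of("concat.ref.txt");   // hypothetical reference location
        if (!Files.exists(referenceFile)) {
            // First run: no reference output exists yet, so the developer has to
            // inspect the actual output and confirm it as the reference output.
            System.out.println("No reference output yet, actual output was:\n" + actual);
        } else if (actual.toString().equals(Files.readString(referenceFile))) {
            System.out.println("Test successful");
        } else {
            System.out.println("Test failed: actual and reference output differ");
        }
    }
}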
Actual and reference output
We have already seen that when a test is run, its output is collected (referred to as the actual output).
But how do actual and reference output come into existence? Let's look at the typical life cycle:
A new test is run for the first time. As there is no reference output yet, the comparison of the output and hence the test will fail.
The developer will now examine the collected actual output.
If the actual output contains erroneous data, the test program must be corrected first and run again.
If the actual output contains what is expected, he will confirm the result and save the actual output as reference output.
If the test is then run again, the comparison of the output will succeed and the test is considered successful.
If the test is run again after changes to the method under test, the actual output may change. If it differs from the reference output, the test is considered failed.
The developer must now again compare the two outputs and decide whether the new actual output should be saved as the new reference output or whether the test program must be adjusted to produce the old reference output again.
As we have seen, there is a new concept of comparing actual and reference output and saving the actual output of a test as new reference output. For these steps, we will need support from our test framework.
To make this approach work, the output created by a test method must be stable, i.e. the output should not contain volatile data like the current time.
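A common way to achieve this is to mask volatile parts before they are written out. The following sketch is purely illustrative; the Order class and its fields are invented for this example:
import java.time.Instant;

// Hypothetical example: mask volatile data such as timestamps so that
// actual and reference output stay comparable from run to run.
class Order {
    String id = "4711";
    Instant createdAt = Instant.now();   // volatile: different on every run
}

public class StableOutput {
    // Writes out the order with the timestamp replaced by a fixed placeholder.
    static String stable(Order order) {
        return "Order[id=" + order.id + ", createdAt=<TIME>]";
    }
    public static void main(String[] args) {
        System.out.println(stable(new Order()));   // Order[id=4711, createdAt=<TIME>]
    }
}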
Implementation
It should now be clear how MagicTest works conceptually. But it may still be unclear how the simple call concat("s1", "s2") in our test program creates the necessary output for comparison.
To generate the needed output without having to manually code these statements, we instrument the byte code before the test method is executed: Each call to the concat() method is instrumented so that parameters, return value, and thrown exception are automatically traced.
Looking at the byte code level, a call to the method under test will roughly look like this:
try {
    printParameters("s1, s2");
    String result = StaticMethod.concat("s1", "s2");
    printResult(result);
} catch (Throwable t) {
    printError(t);
}
The data traced out with the print methods is then collected and visualized as an HTML report.
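The print methods shown above belong to MagicTest's internals; conceptually they do little more than write one trace line per event, roughly along these lines (placeholder implementations for illustration only):
// Placeholder implementations, for illustration only: one trace line per event.
static void printParameters(String params) { System.out.println("call: concat(" + params + ")"); }
static void printResult(Object result) { System.out.println("  returns: " + result); }
static void printError(Throwable t) { System.out.println("  throws: " + t); }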
The fact that each call to the method under test is properly documented allows us to use conditions or loops in our tests as well if the need arises: it remains clear and documented what has been tested, even if the test method becomes quite complex.
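As a sketch of what this could look like (not taken from the example above), a test method might loop over a whole table of inputs, and every single call still shows up in the report:
// Sketch: a data-driven test method with a loop. Since each call to
// concat() is traced, the report still documents every tested case.
@Test
public static void concatMany() {
    String[][] inputs = { { "s1", "s2" }, { "", "s2" }, { "s1", "" }, { null, "s2" } };
    for (String[] pair : inputs) {
        StaticMethod.concat(pair[0], pair[1]);
    }
}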
Testing error conditions
The pseudo byte-code has shown that each call to the method under test is surrounded by a try-catch-block. This makes testing error conditions a breeze.
Using the traditional approach, testing error conditions has always been cumbersome. Look at the following two possibilities:
@Test
public static void concatErr1() {
    try {
        StaticMethod.concat(null, "s2");
        fail("should fail");
    } catch (IllegalArgumentException e) {
        // expected result
    }
}

@Test(expectedExceptions = { IllegalArgumentException.class })
public static void concatErr2() {
    StaticMethod.concat(null, "s2");
}
Neither of them looks appealing: either you end up with a lot of boilerplate code in your test method or with a lot of test methods in your test class.
Using the visual approach, we can test error conditions like normal method calls. So most of the time, there will be no need to have more than one test method for a method under test. If you still want to test different behaviors with different test methods, you are free to do this.
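With the visual approach, the error case from above shrinks to a single traced call; the thrown exception simply appears in the report as the result of that call (a sketch, equivalent to the calls in the first example):
// No try/catch and no expectedExceptions annotation needed: the
// IllegalArgumentException is caught by the instrumentation and traced.
@Test
public static void concatErr() {
    StaticMethod.concat(null, "s2");   // traced as: throws IllegalArgumentException
}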
The fact that exceptions are automatically caught offers us another advantage: if the execution of a single call to the method under test fails, this failure is documented, but the execution of the rest of the test method normally continues.
This makes correcting failed tests really fast and convenient, as we always have all relevant information at hand. With the traditional approach, execution stops after the first error, so you have to guess whether subsequent calls can fail as well. And if your guess is not right, you have to run the test again and again.
Creating tests
After having heard about actual and reference output, it is clear that the test report shown with the first example comes from a run with already saved reference output.
If we run this test the first time, the report will look like this:
As you can see, the report shows the actual output ("[act]") beside the expected reference output ("[ref]"). Because we do not yet have a reference output, the reference lines remain empty and the test fails.
We can now check the actual output of the whole test method at a glance and then save it as the new reference output with a single click on the save link using the Eclipse plug-in.
After having saved the new reference output, the test report will automatically be reloaded and you will see that the test is now considered successful.
Maintaining tests
The advantage of the visual approach for maintaining tests becomes even more apparent if our tests must be adapted due to changes in the method under test.
Let's assume that we have a function for building file paths. As there are many different cases to consider, we have quite a bunch of test cases. Then it is decided that we should use file URIs instead of local paths, so every returned file path must now start with "file://".
With the traditional approach, we must now incorporate this change in every single assertion of the tests.
With the visual approach, the test report will show that all test methods have failed, but it will also reveal that the failure is only due to the missing "file://" prefix. So we can adapt all test methods at once with a single click on the save link, without needing to change any test source code.
Of course it will happen that you also have to change the test code after changing the methods under test, but nevertheless maintaining tests will be much faster with the new approach.
This effortlessness in handling output will motivate developers to really check all relevant data in a test. So if you have to maintain a sorted list and add a new entry, you can easily dump the whole list to check that everything is correct. Nobody would do that using assertions, as it is just too painful.
With the visual approach this becomes feasible, and MagicTest offers support for this kind of operation with formatters and returners. Your tests gain accuracy as more data is written out and compared, because it becomes more likely that errors will be caught.
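As an illustration of this kind of test (the SortedList class is invented for this example, and the exact use of formatters and returners is beyond the scope of this article):
// Sketch: after adding an entry, dump the complete list so that the whole
// state becomes part of the compared output. SortedList is a hypothetical
// class under test.
@Test
public static void addEntry() {
    SortedList list = new SortedList();
    list.add("banana");
    list.add("apple");
    list.add("cherry");
    System.out.println(list);   // e.g. [apple, banana, cherry]
}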
Testing theory says that you should test one behavior with just a single test, but this is often difficult to achieve in reality and adds additional cost when writing the tests. While such repeated testing really becomes a problem with the traditional approach, where you have to change each test method manually, the visual approach helps you make all changes easily and quickly.