Monday, December 14, 2009

MagicTest - an automated visual approach for testing

Introduction

Automated tests have become widely accepted in software development. Numerous tools like TestNG or JUnit provide great support for writing and running them efficiently. Methods like TDD or BDD strive to integrate testing even more tightly into the development process.

The only thing that hinders even wider acceptance is the fact that writing tests takes additional time. I do not want to dive into the philosophical discussion of whether writing tests pays off in the long run, but it remains a fact that instead of writing just the single method needed in the production code, the developer has to write at least one additional test method.

So the obvious goal for a test framework must be to reduce the overhead needed for writing and maintaining tests - and to make sure that we get the most out of the additional time spent.

In this article, I will introduce a new approach to testing. It will not replace but extend the traditional approach, so it can easily be integrated with existing tests and tools.

The new approach offers the following advantages:

  • Creating tests is easier because the expected result does not have to be included in the test source, but is checked using a visual yet automated approach

  • Maintaining tests is easier because all needed information is visible at a glance and changes can easily be made without having to change Java code

  • Documenting tests is done automatically without additional effort in a way that can even be shown to your manager or customer

You think this sounds too good to be true? Let's have a look at a first example.

A first example

In our first example, we want to check that a simple static string concatenation method works as expected. To test this method, we write the following test method:

@Test
public static void concat() {
    StaticMethod.concat("s1", "s2");
    StaticMethod.concat(null, "s2");
    StaticMethod.concat("s1", null);
}


If we run this test using MagicTest, we get the following output:
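Roughly sketched, the report documents each traced call with its parameters and result (the real report is an HTML page, so this rendering is illustrative only; the exception type is taken from the error-condition example further below):

concat - SUCCESS
  concat("s1", "s2") = "s1s2"
  concat(null, "s2") -> IllegalArgumentException
  concat("s1", null) -> IllegalArgumentException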

This looks promising, but how does this work?

The visual approach

To make this work, we have to change the definition of when a test is considered successful.

Currently, your test methods are made up of various assertions or calls that can throw exceptions. When you run your tests, a test is considered successful if it completes without throwing any exception or if it throws an exception that was expected.

If we want to test our string concatenation method with the traditional approach, we end up with test code like this:

assertEquals(StaticMethod.concat("s1", "s2"), "s1s2");


As we can see, we have to include the expected result in the test code so it can be compared using an assertion.

To avoid having to include the expected result in the test code, we specify that our test methods should output the relevant data, which is then compared by the test framework against a stored reference output. If the actual and the reference output differ, the test has failed.

So the extended definition is the following:

A test is considered successful if it completes without throwing any exception - or if it throws an exception that was expected - and if the actual output is equal to the stored reference output.

In fact, we simply adopt the way a user would do testing without a test framework: he would test his code visually by looking at the output of println() statements or by inspecting values interactively in the debugger.

So we can say that MagicTest automates the visual testing approach.
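As a minimal sketch of this core idea (this is not MagicTest's actual API - the file layout and helper names are made up for illustration):

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Sketch: compare the collected actual output against a stored reference output
public class ReferenceCheck {

    public static void check(String testName, String actualOutput) throws Exception {
        Path refFile = Paths.get(testName + ".ref.txt");
        // Save the actual output so the developer can inspect and confirm it
        Files.write(Paths.get(testName + ".act.txt"), actualOutput.getBytes("UTF-8"));
        if (!Files.exists(refFile)) {
            // First run: no reference output exists yet, so the test fails
            throw new AssertionError("No reference output yet - inspect and save the actual output");
        }
        String referenceOutput = new String(Files.readAllBytes(refFile), "UTF-8");
        if (!actualOutput.equals(referenceOutput)) {
            throw new AssertionError("Actual output differs from reference output");
        }
    }
}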

Actual and reference output

We have already seen that when a test is run, its output is collected (referred to as the actual output).

But how do actual and reference output come into existence? Let's look at the typical life cycle:

  • A new test is run for the first time. As there is no reference output yet, the comparison of the output, and hence the test, will fail.

  • The developer will now examine the collected actual output.

  • If the actual output contains erroneous data, the test program must be corrected first and run again.

  • If the actual output contains what is expected, he will confirm the result and save the actual output as reference output.

  • If the test is then run again, the comparison of the output will succeed and the test is considered successful.

  • If the test is run after changes to the method under test, the actual output may change. If it now differs from the reference output, the test is considered failed.

  • The developer must then compare the two outputs again and decide whether the new actual output should be saved as the new reference output or whether the test program must be adjusted to produce the old reference output again.

As we have seen, there is a new concept of comparing actual and reference output and saving the actual output of a test as new reference output. For these steps, we will need support from our test framework.

To make this approach work, the output created by a test method must be stable, i.e. it should not contain volatile data like the current time.
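If the output does contain such data, it has to be normalized before the comparison. A minimal sketch of this idea (the class and method names are made up; this is not a description of a MagicTest feature):

import java.util.regex.Pattern;

// Sketch: replace volatile data with a fixed placeholder so the output becomes stable
public class OutputNormalizer {

    private static final Pattern TIMESTAMP =
            Pattern.compile("\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2}");

    // Turns "created at 2009-12-14 09:30:00" into "created at <timestamp>"
    public static String normalize(String output) {
        return TIMESTAMP.matcher(output).replaceAll("<timestamp>");
    }
}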

Implementation

It should now be clear how MagicTest works conceptually. But it may still be unclear how the simple call concat("s1", "s2") in our test program creates the necessary output for comparison.

To generate the needed output without having to code these statements manually, we instrument the byte code before the test method is executed: each call to the concat() method is instrumented so that parameters, return value, and thrown exceptions are automatically traced.

Looking at the byte code level, a call to the method under test will roughly look like this:

try {
    printParameters("s1, s2");
    String result = StaticMethod.concat("s1", "s2");
    printResult(result);
} catch (Throwable t) {
    printError(t);
}


The data traced out with the print methods is then collected and visualized as an HTML report.

The fact that each call to the method under test is properly documented also allows us to use conditions or loops in our tests if the need arises: it remains clear and documented what has been tested, even if the test method becomes quite complex.
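For example, a hypothetical test like the following would still produce a fully documented report, with one traced entry per call:

@Test
public static void concatLoop() {
    // Each call to concat() is traced individually, so the report
    // shows exactly what has been tested despite the loop
    String[] inputs = { "a", "b", null };
    for (String input : inputs) {
        StaticMethod.concat(input, "x");
    }
}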

Testing error conditions

The pseudo byte code above shows that each call to the method under test is surrounded by a try-catch block. This makes testing error conditions a breeze.

Using the traditional approach, testing error conditions has always been cumbersome. Look at the two possibilities available:

@Test
public static void concatErr1() {
    try {
        StaticMethod.concat(null, "s2");
        fail("should fail");
    } catch (IllegalArgumentException e) {
        // expected result
    }
}

@Test(expectedExceptions = { IllegalArgumentException.class })
public static void concatErr2() {
    StaticMethod.concat(null, "s2");
}


Neither of them looks appealing: either you end up with a lot of boilerplate code in your test method or with a lot of test methods in your test class.

Using the visual approach, we can test error conditions like normal method calls. So most of the time, there will be no need for more than one test method per method under test. If you still want to test different behaviors with different test methods, you are free to do so.

The fact that exceptions are automatically caught offers us another advantage: if the execution of a single call to the method under test fails, this failure gets documented, but the execution of the rest of the test method normally continues.

This makes correcting failed tests really fast and convenient, as we always have all relevant information at hand. With the traditional approach, execution stops after the first error, so you have to guess whether subsequent calls can fail as well. And if your guess is not right, you have to run the test again and again.

Creating tests

After having heard about actual and reference output, it is clear that the test report shown with the first example comes from a run with already saved reference output.

If we run this test the first time, the report will look like this:
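Roughly sketched (again illustrative only, not the exact HTML layout):

concat - FAILED
  concat("s1", "s2")
    [act] "s1s2"
    [ref]
  concat(null, "s2")
    [act] IllegalArgumentException
    [ref]
  concat("s1", null)
    [act] IllegalArgumentException
    [ref]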

As you can see, the report shows the actual output ("[act]") beside the expected reference output ("[ref]"). Because we do not yet have a reference output, the reference lines remain empty and the test fails.

We can now check the actual output of the whole test method at a glance and then save it as the new reference output with a single click on the save link using the Eclipse plug-in.

After the new reference output has been saved, the test report is automatically reloaded and you will see that the test is now considered successful.

Maintaining tests

The advantage of the visual approach for maintaining tests becomes even more apparent if our tests must be adapted due to changes in the method under test.

Let's assume that we have a function for building file paths. As there are many different cases to consider, we have quite a bunch of test cases. Then it is decided that we should use file URIs instead of local paths, so every returned file path must now start with "file://".

With the traditional approach, we must now incorporate this change into every single call in the tests.

With the visual approach, the test report will show that all test methods have failed, but it will also reveal that the failure is only due to the missing "file://". So we can adapt all test methods at once with a single click on the save link - without having to change any test source code.

Of course, it will happen that you must also make changes to the test code after changing the methods under test, but nevertheless maintaining tests will be much faster with the new approach.

This effortlessness in handling output will motivate developers to really check all relevant data in a test. So if you have to maintain a sorted list and add a new entry, you can easily dump the whole list to check that everything is correct. Nobody would do that using assertions, as it is just too painful.

With the visual approach this becomes feasible, and MagicTest offers support for this kind of operation with formatters and returners. Your tests gain accuracy as more data is written out and compared, because more errors are likely to be caught - as sketched below.
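For illustration, a hypothetical test along these lines (the method under test is made up, and we assume the printed list becomes part of the compared output):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

@Test
public static void insertSorted() {
    List<String> list = new ArrayList<String>(Arrays.asList("apple", "cherry"));
    // Hypothetical method under test that inserts into a sorted list
    SortedLists.insert(list, "banana");
    // Dumping the whole list checks order and content in one go
    System.out.println(list);
}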

Testing theory says that you should test one behavior with just a single test, but this is often difficult to achieve in practice and adds additional cost when writing the tests. While this repeated testing really becomes a problem with the traditional approach, where you have to change each test method manually, the visual approach helps you to make all changes easily and quickly.

Sunday, April 5, 2009

Efficiently creating test data with Oracle

When working with an Oracle database, you may need to create some test data.
As DUAL contains just a single row, you often end up abusing ALL_OBJECTS as the row source for your statement.

But even ALL_OBJECTS has its limitations regarding the number of rows, so what worked for a few thousand rows will suddenly stop working for millions of rows - and then you have to switch to PL/SQL or to another programming language of your choice for this simple task.

But there is a solution in pure Oracle SQL using hierarchical queries. Have a look at the following statement, which allows you to generate as many rows as you want:

-- Use a hierarchical query to generate 1000 rows
select level id from dual
connect by level <= 1000;

Note that LEVEL starts with 1, so the query above will create the numbers from 1 to 1000.

You can then use CREATE TABLE AS or INSERT with a subquery to efficiently create your test data. The needed row data can easily be generated from the LEVEL value using functions like MOD and DECODE or a CASE expression.

The following example creates a test table with 3 columns and 1000 rows:

create table t1 as
select
    level id,
    mod(level, 10) count10,
    case when mod(level, 10) = 0
        then '----'
        else lpad(level, 4, '-')
    end text
from dual
connect by level <= 1000;
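If the table already exists, INSERT with a subquery works the same way; for example, to append another 1000 rows to the table t1 created above:

-- Append 1000 more rows with ids 1001 to 2000
insert into t1
select
    1000 + level,
    mod(level, 10),
    lpad(level, 4, '-')
from dual
connect by level <= 1000;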

Friday, March 27, 2009

Assert for SQL*Plus

Whenever you have to administer an Oracle database, you end up with a bunch of SQL*Plus scripts. Unfortunately, SQL*Plus is not a real scripting language and therefore has no control constructs at all - even a simple if-then-else is missing.

The only option you have is to use

whenever sqlerror exit failure

to exit whenever a SQL statement fails. This can help you to detect and avoid the duplicate execution of initialization scripts with DDL statements. On the other hand, it will hardly help to stop the erroneous use of DML statements, as they will not fail but simply report "0 rows updated".

In short, it would sometimes be very useful if you could at least check a simple condition and stop execution if the check fails - a feature widely known as assert. Using the whenever sqlerror statement, I found a way to provide assert-like functionality in SQL*Plus.

Have a look at the script ASSERT.SQL below. It is called with a select statement as the condition for the assertion. If the select statement returns any data, the calling program continues. If no data is returned, the assertion is considered failed and the calling program is stopped.

Here is an example of how to use it. Suppose you want to check for the existence of the table MY_TABLE before you execute some DML against it. Your script will look like this:

@assert 'select * from user_tables where table_name = ''MY_TABLE'''
-- your DML statements go here

If the table MY_TABLE does not exist, your program will be stopped with the following error message:

declare
*
ERROR at line 1:
ORA-20001: Assertion failed: select * from user_tables where table_name =
'MY_TABLE'
ORA-06512: at line 11


Enjoy,
Thomas

--
-- ASSERT.SQL
--
-- Provide Assert-like functionality for SQL*Plus.
--
-- Author:
-- Thomas Mauch
--
-- Usage:
-- Call this script with a select statement as condition for the
-- assertion. If the select statement returns any data, the calling
-- program continues. If no data is returned, the assertion is
-- considered failed and the calling program is stopped.
--
-- Example:
-- @assert 'select * from myTable where myId = 1'
--
-- Note:
-- - This script sets verify and feedback and changes the behavior
-- in case of SQL errors; adapt these settings to your needs.
-- - The use of the q-quote operator for the hassle-free use of
-- quoted strings requires Oracle 10g.


-- Do not show substitution made
set verify off
-- Do not give feedback about executed commands
set feedback off

-- Stop script if an error occurs
whenever sqlerror exit failure

-- Implementation of the ASSERT functionality:
-- Evaluate the given query. If at least one row is selected, the
-- exists clause returns true and makes the select-into statement
-- succeed. If no row is selected, it fails with a NO_DATA_FOUND
-- error.
declare
    n integer;
begin
    execute immediate
        'declare n integer; ' ||
        'begin ' ||
        q'#select 1 into n from dual where exists ( &1 ); #' ||
        'end;';
exception
    when no_data_found then
        raise_application_error(-20001, q'#Assertion failed: &1#');
    when others then
        raise_application_error(-20001, q'#Error in assertion: &1#', true);
end;
/

-- Reset error handling
whenever sqlerror continue

Wednesday, March 4, 2009

The common shortcoming of Undo

Most applications today feature undo functionality for several or even an unlimited number of steps. But unfortunately, this functionality has a common shortcoming right where it would be most crucial: when it comes to closing documents.

If the application asks you whether you want to save the changes you made, the following can easily happen: you promptly click discard because you're sure that you have not made any valuable changes, or you start thinking about what changes you might have made - and as the application won't easily tell you what you have changed, you finally click discard anyhow.

Obviously this decision can be wrong. And if you notice it just a few seconds too late, your work may be gone - forever. Note that even automatically created backup files normally do not help any further, as they have typically just been deleted - or contain the document in the state it was in when you loaded it, before you started editing.

So the question arises: why is the undo history tied to a specific document? Should there not be a concept like a "global" undo where you can play back your actions as you can on a VCR?

Clearly there are some subtle problems to solve, and the "global" undo should not replace but extend the document-specific undo. Nevertheless, it would add another safety net for all users.