
Acceptance test automation

Acceptance tests are used to verify that a story (requirement) is complete. Automating these tests is common practice and supports regression testing and iterative development. Yet even though these tests are extremely powerful and have improved the quality of delivered projects, they are often not used to their full potential. This article provides some hints and tips to help you make the most of your automation effort.

Introduction

Many projects must be tested in a system test or pre-production environment before going live. It is common for this testing to be conducted by another team who repeat much of what the automated acceptance tests already cover, either manually or with another tool. This often happens because the acceptance tests cannot easily be run in different environments, because it is hard for the test team to understand what the tests are doing, or because the test data is buried in the code, making it hard to find or change.

Hiding the intention of a test in the code makes it difficult to understand what the test is trying to do. It also means non-technical team members cannot contribute as easily to the testing effort, and technical people spend more time working on automated test code. The tests will inevitably need to be refactored (they are code, after all); separating the intention from the implementation means you can focus on refactoring how the test is implemented, not what the test is supposed to be doing.

Separating intention, implementation and data

The core idea is to separate intention, implementation and data. Environment is a fourth concern: it is important to know which environment the tests will run against and to vary the data and the implementation of the test accordingly.

The implementation provides the words of a DSL; the intention uses those words to form a sentence or paragraph, a test scenario that can be easily read. The intention can be written by non-technical team members in a language that is independent of the implementation. This is important because the implementation can then be refactored without changing the intention of the test. The reverse is also true: the intention of a test can change without impacting the implementation, as long as an implementation exists for every word used in the scenario. Even if, at worst, you decide to throw away the entire implementation and start again, you still retain the test intention, and therefore do not lose the most important part of the automated tests: what they are trying to do.
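
To make this concrete, here is a minimal sketch, assuming a Java fixture for a banking application (all names are illustrative, not from any real project). Each public method is one word of the DSL; a test scenario is a paragraph composed from those words, and only the method bodies know how the application is driven:

public class BankingDsl {
    // word: "log in as ..."
    public void logInAs(String username, String password) {
        // UI automation goes here; it can be rewritten freely
        // without changing any scenario that uses this word
    }
    // word: "the starting balance is ..."
    public boolean startingBalanceIs(String expected) {
        // UI check goes here
        return true;
    }
}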

Data should also be split into intention and implementation. By doing this, the data can be referred to in the test by abstraction, keeping the implementation out of sight. For example, a test may specify a "Gold Customer"; how such a customer is identified is not part of the test but part of the data layer. This makes it possible to change the way the data is sourced: it may come from an Excel spreadsheet filled in by hand, or be extracted from a database, an XML file or some other source. Creating a data layer and data abstraction are not new techniques; they are commonly used in application code and should be applied to automated tests because the benefits apply equally.
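
A sketch of that abstraction, assuming Java and an interface name of my own choosing (CustomerRecord is the bean used later in this article):

import java.util.List;

// Tests ask for customers by role; how a "Gold Customer" is identified
// (spreadsheet, database, XML) is hidden behind this interface and can
// change without touching a single test.
public interface CustomerData {
    List<CustomerRecord> getCustomerList(String customerType);
}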

A worked example

Below is an example of how this could be implemented: the data is stored in an Excel spreadsheet, the intention is captured in Concordion, and the test is implemented using Selenium. The same approach has also been implemented with Quick Test Pro, and I believe it could be done using Twist; the code below is just an example.

Concordion captures the intent of the test as an HTML file and uses span tags, annotated with Concordion attributes, to determine which fixture methods to call. A minimal sketch of such a specification, assuming the fixture methods shown below (the surrounding wording is illustrative):
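
<html xmlns:concordion="http://www.concordion.org/2007/concordion">
<body>
  <p>
    All <span concordion:set="#customerType">Gold Customer</span> records
    <span concordion:execute="#customers = customerProvider(#customerType)">
    in the test data</span> can
    <span concordion:assertTrue="loginAndVerifyStartingBalance(#customers)">
    log in and see their expected welcome message</span>.
  </p>
</body>
</html>

The fixture behind the specification relies on a data layer, which opens the Excel spreadsheet through the JDBC-ODBC bridge and selects the rows for the requested customer type: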

// assumed fields on this class: String fileName, tableName;
// ResultSet rsRecords; List<CustomerRecord> customerList
public void processFile(String customerType) {
    Connection c = null;
    Statement stmnt = null;
    try {
        // Load the JDBC-ODBC bridge and open the spreadsheet as an ODBC
        // data source (the Excel driver name in the URL is assumed)
        Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
        c = DriverManager.getConnection(
                "jdbc:odbc:Driver={Microsoft Excel Driver (*.xls)};DBQ=" + fileName);
        stmnt = c.createStatement();
        // Each worksheet is queried as a table named "<sheet name>$"
        String query = "Select * from [" + tableName + "$] where CustomerType = '"
                + customerType + "'";
        rsRecords = stmnt.executeQuery(query);
        customerList = new ArrayList<CustomerRecord>();
    } catch (Exception e) {
        e.printStackTrace();
    }
}

In the test case, the data layer is called to extract the data for the appropriate customer type from the data source (Excel in this case):

// file is the Excel-backed data layer shown above
public List<CustomerRecord> customerProvider(String customerType) {
    return file.getCustomerList(customerType);
}
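
A plausible sketch of getCustomerList itself, assuming it delegates to processFile and then copies each row of the result set into a CustomerRecord bean (the column names match the spreadsheet):

public List<CustomerRecord> getCustomerList(String customerType) throws SQLException {
    processFile(customerType);
    // Copy each matching row into a bean so the tests never touch JDBC directly
    while (rsRecords.next()) {
        CustomerRecord customer = new CustomerRecord();
        customer.setUsername(rsRecords.getString("User Name"));
        customer.setPassword(rsRecords.getString("Password"));
        customer.setWelcomeMessage(rsRecords.getString("Welcome Message"));
        customerList.add(customer);
    }
    return customerList;
}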

The test case then iterates over the list and performs the test on each customer record:

public boolean loginAndVerifyStartingBalance(List<CustomerRecord> customers) {
    for (CustomerRecord customer : customers) {
        // login() and verifyWelcomeMessage() are assumed Selenium-backed
        // helpers; see the sketch below
        login(customer.getUsername(), customer.getPassword());
        if (!verifyWelcomeMessage(customer.getWelcomeMessage())) {
            return false;
        }
    }
    return true;
}
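
The Selenium calls behind these helpers depend entirely on the application under test; a minimal sketch using the Selenium RC API of the period (the locators, the URL and the selenium field are assumptions):

private void login(String username, String password) {
    // selenium is assumed to be a com.thoughtworks.selenium.Selenium instance
    selenium.open("/login");
    selenium.type("id=username", username);
    selenium.type("id=password", password);
    selenium.click("id=loginButton");
    selenium.waitForPageToLoad("30000");
}

private boolean verifyWelcomeMessage(String expectedMessage) {
    // Passes if the expected welcome text appears anywhere on the page
    return selenium.isTextPresent(expectedMessage);
}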

A successful test run results in an HTML version of the specification in which Concordion highlights each passing assertion in green; failures are shown in red alongside the expected and actual values.