Distributed Browser Testing With Selenium and CruiseControl

Introduction

Selenium RC provides an excellent framework for automating UI tests. The issue we have had with this approach in the past is that we have struggled to automate the use of these tools, as no single platform can run the full set of browsers we need for our regression testing.

I missed the Google London Test Automation Conference, but Google Video has an excellent presentation by Jason Huggins from the day on exactly these topics. Jason suggests using subordinate machines to perform browser-specific tests after deployment. Inspired by this, I've rolled up the following as a proof of concept:

Overview

The top-level build is just a normal instance of CruiseControl running an ordinary build/unit-test/deploy cycle. If you are not familiar with CruiseControl: in a nutshell, it lets you fully automate the build/test/deploy cycle with Ant. It can be configured to trigger builds automatically when source code is checked into the repository, and to report through a variety of mechanisms on completion.
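
For reference, a skeletal config.xml for that master project might look like the following sketch; the project name, host names, addresses and paths are all illustrative:

    <cruisecontrol>
        <project name="main-build">
            <!-- any check-in triggers the cycle -->
            <modificationset quietperiod="60">
                <cvs localworkingcopy="checkout/main"/>
            </modificationset>
            <schedule interval="300">
                <ant buildfile="checkout/main/build.xml" target="build-test-deploy"/>
            </schedule>
            <publishers>
                <!-- report on completion -->
                <email mailhost="mail.example.com"
                       returnaddress="cruise@example.com"
                       buildresultsurl="http://cruise.example.com/buildresults/main-build">
                    <always address="dev-team@example.com"/>
                </email>
            </publishers>
        </project>
    </cruisecontrol>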

CruiseControl also provides a JMX interface, which is what I am using to launch the Selenium cruise builds: on completion of my test/deploy cycle, I use an Ant target like the following to trigger the remote Selenium builds.
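
A sketch of that target, assuming each client machine runs CruiseControl's JMX HTTP adaptor on its default port (8000); the host names (selenium-win, selenium-mac) and project names (selenium-windows, selenium-mac) are examples:

    <target name="trigger-selenium-builds">
        <!-- Force a build on each Selenium client by hitting the
             CruiseControl JMX HTTP adaptor; ignoreerrors keeps one
             unreachable client from failing the master build -->
        <get src="http://selenium-win:8000/invoke?operation=build&amp;objectname=CruiseControl+Project%3Aname%3Dselenium-windows"
             dest="trigger-windows.txt" ignoreerrors="true"/>
        <get src="http://selenium-mac:8000/invoke?operation=build&amp;objectname=CruiseControl+Project%3Aname%3Dselenium-mac"
             dest="trigger-mac.txt" ignoreerrors="true"/>
    </target>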

Process flow

[Figure: process flow diagram]

The Selenium client Ant script run from CruiseControl pulls the latest tests out of CVS and executes them. Each Selenium cruise instance can have a different execution target, set via the target attribute of the <ant> tag in its config.xml.
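
For example, the relevant fragment of the Windows client's config.xml might look like this; the project, path and target names are illustrative and match the build-file sketch further below:

    <project name="selenium-windows">
        <!-- pick up the latest acceptance tests from CVS -->
        <modificationset quietperiod="30">
            <cvs localworkingcopy="checkout/selenium-tests"/>
        </modificationset>
        <schedule interval="60">
            <!-- each client machine points at its own target:
                 run-windows-browsers here, run-mac-browsers on the Mac -->
            <ant buildfile="checkout/selenium-tests/build.xml"
                 target="run-windows-browsers"/>
        </schedule>
    </project>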

Using the <antcall> task in the target build, it's possible to generalise the script so that the same set of tests can be run against multiple browsers on the client.

This is a sample of the Ant file I use for this. It loads per-machine settings from ${user.name}.properties, then runs the same suite once per browser.
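
What follows is a minimal sketch, assuming Selenium RC's -htmlSuite runner; app.url (read from the properties file), TestSuite.html and the browser labels are illustrative names:

    <project name="selenium-tests" default="run-windows-browsers">

        <echo message="Loading properties from ${user.name}.properties"/>
        <property file="${user.name}.properties"/>

        <!-- one top-level target per client platform; each machine's
             CruiseControl config.xml picks the appropriate one -->
        <target name="run-windows-browsers">
            <antcall target="run-suite">
                <param name="browser" value="*iexplore"/>
                <param name="label" value="ie"/>
            </antcall>
            <antcall target="run-suite">
                <param name="browser" value="*firefox"/>
                <param name="label" value="firefox"/>
            </antcall>
        </target>

        <!-- run one browser through Selenium RC's HTML suite runner:
             browser, base URL, suite file, result file -->
        <target name="run-suite">
            <mkdir dir="results"/>
            <java jar="lib/selenium-server.jar" fork="true" failonerror="false">
                <arg value="-htmlSuite"/>
                <arg value="${browser}"/>
                <arg value="${app.url}"/>
                <arg file="tests/TestSuite.html"/>
                <arg file="results/${label}-results.html"/>
            </java>
        </target>

    </project>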

The result files produced are checked back into the main repository so that they are available to all. The standard cruise mail notification and web-site links also provide this information.
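
The check-in step at the end of the client build can be done with Ant's built-in <cvs> task; the directory and file names here follow the sketch above:

    <target name="publish-results">
        <!-- commit the generated result files so the master site and
             mail notifications can link to them; 'add' may fail
             harmlessly if the files are already under version control -->
        <cvs dest="results" command="add ie-results.html firefox-results.html"
             failonerror="false"/>
        <cvs dest="results" command="commit -m selenium-results"/>
    </target>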

In his talk, Jason described how he was experimenting with capturing the running tests themselves as screen cams for review at a later date. I’ve not pursued that route for now as it seems to be pretty difficult to do in a totally cross platform way.

I’m hoping that this will help speed up some of our regression testing.