Saturday, September 23, 2006

Distributed Browser Testing With Selenium and CruiseControl


Selenium RC provides an excellent framework for automating UI tests. The problem we have had with this approach in the past is automating the use of these tools: no single platform can run the full set of browsers we need for our regression testing.

I missed the Google London Test Automation Conference, but Google Video has an excellent presentation from the day by Jason Huggins on exactly this topic. Jason suggests using subordinate machines to perform browser-specific tests after deployment. Inspired by this, I've rolled up the following as a proof of concept:


The top level build is just a normal instance of CruiseControl running any normal build/unit-test/deploy cycle. If you are not familiar with CruiseControl: in a nutshell, it allows you to fully automate the build/test/deploy cycle with Ant. It can be configured to trigger builds automatically when source code is checked into the repository, and to report through a variety of mechanisms on completion.
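For reference, a minimal config.xml entry for the top level project might look something like this (the project name, working copy paths and target names here are placeholders for illustration, not my actual setup):

```xml
<cruisecontrol>
    <project name="mainbuild">
        <!-- look for new CVS check-ins; wait for a 60s quiet period -->
        <modificationset quietperiod="60">
            <cvs localworkingcopy="checkout/mainbuild"/>
        </modificationset>
        <!-- poll every 5 minutes; run the normal build/test/deploy cycle -->
        <schedule interval="300">
            <ant buildfile="checkout/mainbuild/build.xml"
                 target="build-test-deploy"/>
        </schedule>
        <!-- report results by mail on completion -->
        <publishers>
            <htmlemail mailhost="mailhost"
                       returnaddress="cruise@example.com"/>
        </publishers>
    </project>
</cruisecontrol>
```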

CruiseControl also provides a JMX interface, which is what I am using to launch the Selenium cruise builds; i.e. on completion of my test/deploy cycle I use an Ant target like the following to trigger the remote Selenium builds:

<target name="trigger_remote_selenium_instances">
    <get dest="tempcontents.html" src="http://testmac1:8000/invoke?operation=build&amp;objectname=CruiseControl+Project%3Aname%3Dseleniumtestproject"/>
    <get dest="tempcontents.html" src="http://testlinux:8000/invoke?operation=build&amp;objectname=CruiseControl+Project%3Aname%3Dseleniumtestproject"/>
    <get dest="tempcontents.html" src="http://testxp1:8000/invoke?operation=build&amp;objectname=CruiseControl+Project%3Aname%3Dseleniumtestproject"/>
</target>

Process flow


The Selenium client Ant script run from CruiseControl pulls the latest tests out of CVS and executes them. Each Selenium cruise instance can have a different execution target (set in the ant tag in config.xml).
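For example, the relevant fragment of a client's config.xml might look like this (a sketch; the buildfile path is an assumption, and each machine would point the target attribute at whichever browser set it can actually run):

```xml
<schedule interval="300">
    <!-- every Selenium client uses the same build file, but a
         machine-specific target chosen for the browsers it hosts -->
    <ant buildfile="checkout/seleniumtestproject/build.xml"
         target="run-tests-in-all-browsers"/>
</schedule>
```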

Using the antcall task in the build, it's possible to generalise the script so that you can run the same set of tests against multiple browsers on the client.

This is a sample of the Ant file I use for this:

<target name="init">
    <echo>Loading properties from ${}.properties</echo>
    <property file="${}.properties"/>
    <property name="selenium-server-jar" value="./buildlib/selenium-server.jar"/>
    <property name="selenium-timeout" value="180"/>

    <!-- application specific info -->
    <property name="root.path" value="/Users/goul/Documents/seleniumtestproject"/>
    <!-- NB base URL must not include a trailing slash -->
    <property name="baseURL" value=""/>

    <property name="testsuite" value="${root.path}/selenium/suites/my-suite.html"/>

    <!-- this will have the specific browsers appended to it as they are run -->
    <property name="baseresult" value="../selenium/results/my-results"/>
</target>

<target name="run-tests-in-all-browsers" depends="init">
    <!-- add each browser you want this instance to run here -->
    <antcall target="runSeleniumTests">
        <param name="testbrowser" value="*iexplore"/>
        <param name="testresult" value="${baseresult}-ie.html"/>
    </antcall>

    <antcall target="runSeleniumTests">
        <param name="testbrowser" value="*firefox"/>
        <param name="testresult" value="${baseresult}-firefox.html"/>
    </antcall>
</target>

<target name="runSeleniumTests" depends="init">
    <echo message="running tests with browser ${testbrowser}"/>
    <java jar="${selenium-server-jar}" fork="true" failonerror="true">
        <arg value="-htmlSuite"/>
        <arg value="${testbrowser}"/>
        <arg value="${baseURL}"/>
        <arg value="${testsuite}"/>
        <arg value="${testresult}"/>
        <arg value="-timeout"/>
        <arg value="${selenium-timeout}"/>
    </java>
</target>


The resulting report files are checked back into the main repository so that they are available to everyone. The standard cruise mail notifications and web site links also provide this information.
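Checking the reports back in can itself be scripted. A sketch of an Ant target that shells out to the cvs command line (the results directory matches the baseresult path above; new result files would need a cvs add first):

```xml
<target name="commit-results">
    <!-- commit the generated Selenium result files back to CVS -->
    <exec executable="cvs" dir="../selenium/results" failonerror="true">
        <arg value="commit"/>
        <arg value="-m"/>
        <arg value="automated selenium results"/>
    </exec>
</target>
```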

In his talk, Jason described how he was experimenting with capturing the running tests themselves as screen cams for review at a later date. I've not pursued that route for now, as it seems pretty difficult to do in a totally cross-platform way.

I'm hoping that this will help speed up some of our regression testing.
