Rasmus Møller Selsmark

On software test and test automation

Levels of Software (Test) Automation

September 26, 2012 21:52 by author rasmus

While doing software test automation, I have realized that there is a lot more to automate than just the tests in order for test automation to run successfully and to minimize maintenance. For testing Captia I see the following layers, which I assume are applicable to other software products as well:

[Image: the layers of the test automation eco-system, from Test Environments at the bottom up to Tests at the top]

The important point here is that there exists a foundation for your test automation, which must work properly in order to have a stable environment for running test automation. For a long time I had been focusing on the Tests layer of the above diagram, whereas at ScanJour we have recently worked on the lower parts of the “test automation eco-system”. Especially Jens Tidemann, who started here at the beginning of the year, has pushed this a lot by developing PowerShell scripts for setting up environments and the system under test.

This blog post describes our solution for automating the process of setting up test environments in a uniform way across products.

Use Source Control for storing automation scripts and configuration

All topics described in this post can be automated, and are in our case. As with any other test automation, treat this as a regular development project, i.e.

  • Plan
  • Store all assets (scripts, tools, configuration files etc.) in source control
  • Write unit tests when possible for the tools
  • Consider backwards compatibility when making changes to the tools. In our case the automation supports different versions of the product, which has been really valuable for recreating test environments that were previously set up manually.

As described later, even configurations for setting up customized test environments can be stored in source control.

“Test Environments” layer

This layer contains setting up and configuring test environments, i.e. the machines for executing either manual or automated tests. In our case we’re running virtual machines/environments using Microsoft TFS Lab Management, but this layer could be based on another virtualization technology or physical machines/devices.

For setting up the raw machines, we're currently implementing Microsoft Deployment Toolkit, which automates the deployment of Windows images with/without additional software packages like Microsoft Office etc. that are required by the System Under Test.

Another part of automating this layer is a set of PowerShell scripts that perform the following tasks (a sketch of what such a script could look like follows the list):

  • Join domain (we're typically running network-isolated environments with their own domain controller). This requires the user to enter credentials, so it is done as the first step; the rest of the tasks run automatically.
  • Install general prerequisites, e.g. .NET, IIS
  • Install latest Windows Updates (with an XML-based “exclude” list containing e.g. IE9, which we don't want installed)
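
To give an idea of what such a script could look like, below is a minimal sketch. It is not our actual script: the domain name, the Windows Server feature names and the format of the exclude list are assumptions made for the example.

# Sketch only - domain name, feature names and exclude list format are assumptions
# Step 1: Join the isolated test domain (the only step requiring user interaction)
$credential = Get-Credential
Add-Computer -DomainName "testlab.local" -Credential $credential

# Step 2: Install general prerequisites such as IIS and .NET
# (feature names vary by Windows Server version)
Import-Module ServerManager
Add-WindowsFeature Web-Server, Web-Asp-Net, NET-Framework

# Step 3: Install Windows Updates, skipping titles listed in an XML exclude list
# Assumed format: <Updates><Exclude Title="Internet Explorer 9" /></Updates>
[xml]$excludeList = Get-Content "C:\Automation\WindowsUpdateExcludes.xml"
$excludedTitles = $excludeList.Updates.Exclude | ForEach-Object { $_.Title }

$session  = New-Object -ComObject Microsoft.Update.Session
$searcher = $session.CreateUpdateSearcher()
$pending  = $searcher.Search("IsInstalled=0 and Type='Software'")

$toInstall = New-Object -ComObject Microsoft.Update.UpdateColl
foreach ($update in $pending.Updates) {
    if ($excludedTitles -notcontains $update.Title) { [void]$toInstall.Add($update) }
}

if ($toInstall.Count -gt 0) {
    $downloader = $session.CreateUpdateDownloader()
    $downloader.Updates = $toInstall
    [void]$downloader.Download()

    $installer = $session.CreateUpdateInstaller()
    $installer.Updates = $toInstall
    [void]$installer.Install()
}

Restart-Computer -Force

A real script of course also needs logging and handling of reboots between the steps, but the point is that everything after the initial credential prompt can run unattended.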

With this approach the effort for deploying a baseline environment has been reduced to something like 10-20 minutes of manual work. It of course still takes several hours to deploy the environment in TFS Lab Management, but most of it is done automatically and therefore doesn't require user interaction.


After the environment has been deployed and the above steps have been performed, a “baseline” snapshot is taken, which contains:

  • Windows OS
  • Joined the test domain
  • Configured IIS etc.
  • Windows Updates
  • Microsoft Build Agent in order to do automated deployments from TFS
  • Optionally Microsoft Test Agent if used for automated test execution

Whereas the baseline does not contain:

  • Active directory users
  • Prerequisites for system under test, like database drivers etc.

This way we have environments that can be used for multiple purposes. Be careful what you choose to include in the baseline snapshot, and don't “pollute” it with components that lock the environment to overly specific purposes. In our case we're e.g. not installing Oracle database drivers in the baseline environments, since this would mean that only a specific version of Captia could be installed.

“System Under Test” layer (with prerequisites)

With a baseline for the test environments as described above, the product to be tested can be installed on any of the available environments, depending on the type of test to be performed.

[Image: list of available lab environments, ranging from small environments to a few more complex ones]

E.g. if just performing a visitation or testing some simple functionality, one of the small environments can be selected. Alternatively, if testing more complex scenarios, there are a few complex environments available for this purpose.

Prerequisites

A key point here is to be able to automate installation of both the product to be tested and the prerequisites for the product. Examples of prerequisites that can be installed automatically are:

  • Database drivers
  • Third party dependencies

If a prerequisite either takes a long time to install or is not easily automated, you might choose to install it manually and include it in the baseline snapshot. But this means that the environment is then “locked” and might not easily be used for other purposes.

System Under Test

With the prerequisites installed, the actual product to be tested can then be installed. Depending on the product there might be different ways to do this. If you have multiple teams working on different products, consider establishing a common way of installing each product, so that other teams can set up integration test environments.

We have solved this by developing PowerShell scripts that read from an XML file which modules to install. Each team then develops the PowerShell scripts needed for installing their product. Below is an example of the XML configuration file used for installing Captia on a test environment:

[Image: example XML configuration file for installing Captia on a test environment]
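
The actual configuration format is shown in the image above; since it isn't reproduced here, the snippet below is only a hypothetical illustration of the idea. The element names, module names, share paths and the convention of each module shipping an Install.ps1 are all invented for the example.

# Hypothetical example - element names, paths and the Install.ps1 convention are invented
[xml]$config = @'
<Environment name="captia-test">
  <Product name="Captia" version="4.5">
    <Module name="Captia.Web"      source="\\build\drops\Captia\4.5\Web" />
    <Module name="Captia.Services" source="\\build\drops\Captia\4.5\Services" />
  </Product>
</Environment>
'@

foreach ($module in $config.Environment.Product.Module) {
    # Convention (assumed): each module drop contains an Install.ps1 supporting silent install
    $installScript = Join-Path $module.source "Install.ps1"
    Write-Host ("Installing {0} from {1}" -f $module.name, $module.source)
    & $installScript -Silent
}

In practice the configuration file is of course stored in source control together with the scripts and loaded with Get-Content, rather than being embedded in the script as above.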

“Test Data” layer

Having installed the product, the next step is to apply test data. For this purpose we have developed the Test Data Repository, which is able to populate the following types of data:

  • Active Directory users and groups
  • Exchange mailboxes
  • Files
  • Test data in Captia (e.g. cases and documents)

As can be seen from the conceptual overview below, the Test Data Repository has both a UI front-end and an API.

[Image: conceptual overview of the Test Data Repository, with a UI front-end and an API]

The UI is used in manual testing for populating test data, whereas the API can be used by automated tests for creating prerequisite data and for asserting on results.
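
The Test Data Repository API itself is not shown in this post, so the following is purely an illustration of the idea: an automated test creates the data it depends on through the API and uses the same API to assert on the results afterwards. The assembly, class and method names are invented.

# Invented names - illustrates the idea, not the actual Test Data Repository API
Add-Type -Path "C:\Automation\TestDataRepository.Client.dll"
$repository = New-Object -TypeName TestDataRepository.Client.Repository -ArgumentList "http://testdata/api"

# Create the prerequisite data for the test: an AD user and a Captia case
$user = $repository.CreateUser("testuser01", "CaseWorkers")
$case = $repository.CreateCase("Automated test case", $user)

# ... execute the test against the system under test here ...

# Assert on the result through the same API
$documents = $repository.GetDocuments($case)
if ($documents.Count -ne 1) { throw "Expected exactly one document on the case" }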

“Customizations” layer

If you're doing customizations, it's important that all customizations are installed in a similar way and support silent install. Many of our customers have minor or larger customizations that are installed on top of the standard product.

Automated installation of customizations in test environments is configured in the same XML files used above for installing the standard products. This way the XML configuration file also documents the customer's configuration, i.e. the version of the standard product and add-ons as well as their specific customization.

[Image: XML snippet showing a customization consisting of two packages]

The XML snippet above shows a customization consisting of two packages.
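
As the snippet itself is only available as an image, here is a hypothetical fragment along the same lines, showing a customer configuration with the standard product plus two customization packages (names and schema invented for illustration):

# Invented example - illustrates the idea of the customer configuration file, not the real schema
[xml]$customerConfig = @'
<Environment name="customerx-test">
  <Product name="Captia" version="4.5" />
  <Customization name="CustomerX">
    <Package name="CustomerX.CaseTemplates" source="\\build\drops\CustomerX\CaseTemplates" />
    <Package name="CustomerX.WebParts"      source="\\build\drops\CustomerX\WebParts" />
  </Customization>
</Environment>
'@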

“Customization Test Data” layer

If the customer has any special test data, this is defined in one or more XML files, which are populated into the system using the Test Data Repository described above. It's important to handle a customization like a standard software product, i.e. don't develop anything special for a single customer, but use the same structure and tools for all customizations, in order to be able to automate deployment and test for the customizations as well.

[Image: XML file defining test data for a customization]

The above XML shows test data for a customization, which is applied after the customer solution has been installed in test.
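
Again the actual file is only shown as an image above; a hypothetical example of what such a customer test data definition could look like is given below (structure and names invented), to be fed to the Test Data Repository after the customization has been installed:

# Invented example of customer-specific test data applied via the Test Data Repository
[xml]$customerTestData = @'
<TestData>
  <Users>
    <User name="customerx.caseworker" groups="CaseWorkers" mailbox="true" />
  </Users>
  <Cases>
    <Case title="CustomerX sample case" owner="customerx.caseworker">
      <Document title="Sample letter" file="SampleLetter.docx" />
    </Case>
  </Cases>
</TestData>
'@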

“Tests” layer

With all this we now have a system, optionally with a customization, ready for test. Typically, deploying an environment with a customization takes 30-40 minutes, including reverting to the snapshot, without any user interaction. A huge step forward compared to manually setting up an environment.

TFS Build Definition

As we're running TFS, the actual deployment is configured in a build definition:

[Image: TFS build definition used for deploying to a lab environment]

When starting a deployment, you specify which lab environment to use:

[Image: selecting the lab environment to deploy to]

And which machines on the environment to deploy to (in this case only the webserver):

[Image: selecting the machines in the environment to deploy to]

 

Test automation dependencies include environments, the system under test and test data (as well as many others). Ensuring this foundation is in place is a key factor in developing successful test automation, and I hope this post has given some insight into how we have automated deployment and configuration of test environments at ScanJour.




GOTO Aarhus 2012 - Developers *are* writing functional tests :)

September 9, 2012 22:33 by author rasmus

Even though this is part of my “warm-up” blog posts for the GOTO Aarhus 2012 Conference, I'll start out by referring to a related event this week. At our latest meeting in the Danish TFS User Group, held at the Microsoft office in Hellerup, Rune Abrahamsson from BRFkredit gave a presentation on how they are using http://cuite.codeplex.com/ for developing functional tests for one of their systems. If you are interested, Mads (our agile coach at ScanJour) has also blogged from the user group here.

Although the people attending the TFS User Group meetings are mostly developers, we often have at least one testing-related topic on the agenda, and this time it was clear that many of the developers present do have experience with developing functional tests, often using a UI automation framework like Microsoft Coded UI. So even before attending the GOTO Aarhus 2012 Conference, it seems I can conclude that developers are writing functional tests, which pleases me, as I think the people writing the software are also the best at writing automation for it. The testers can then do actual quality assurance, by ensuring that the customer requirements have been automated, as well as performing exploratory testing to exercise the software in new ways.

[Photo: slide from the presentation showing which areas are covered by developer tests and which by manual testers]

Sorry for the bad picture quality; the slide shows at the bottom which areas (including functional UI tests) are covered by developer tests, whereas e.g. exploratory tests are performed by their manual testers (who are actually domain experts, not full-time testers). I find it very positive to see a development team take quality seriously by including test automation when they are in a situation where they are lacking testers. I guess it is not an uncommon scenario in the industry that you have to get test assistance from other teams/departments.

And it makes me wonder if manual testers should be forced not to do manual scripted tests at all, but only do exploratory testing and quality assurance, since the developers will fill the gap themselves by automating the scripted test cases? :)

 

To put this in the context of this year's GOTO conference, I have been looking at the biographies of some of the speakers, and found a video by Steve Freeman on Sustainable Test-Driven Development, where he touches on topics like:

  • Production and test code that are too tightly coupled, which makes refactoring difficult (beginning of the video)
  • Test code structure (similar to Given/When/Then pattern of e.g. SpecFlow)
  • “You come back to the code after 6 months, and forgot why you did this” (approx. 13:00)
  • Patterns for writing test code. As simple as good variable naming, using DSL syntax to have more readable test code
  • Prepare for your test to fail at some point, simply by having clear error messages when it happens (“Explain yourself”, “Describe yourself” and “Tracer Objects” around 23 mins into the video).
  • Make your tests robust, e.g. “Only Enforce Order When It Matters” (around 40:00)
  • “Tests Are Code Too” (43:35), which is the last slide and also seems to be the headline for this presentation

The last slide is shown here:

[Image: the final slide, “Tests Are Code Too”, with its four bullet points]

Although all four bullets here are important, I have myself faced the “If it's hard to test, that's a clue” problem when developing unit tests, both for production code and for test automation framework code. As an automation tester, I also regularly run into this problem when writing automated functional tests against production code, e.g. missing IDs on UI controls, no clear interface for testing etc. When the development team is writing automated functional tests as part of the definition of done, it should hopefully result in code better suited for automation. And then there is the general conclusion: test automation should be treated like any other coding activity.

This year Steve Freeman has a session on Raspberry Pi with extra toppings Monday at 13:20, which unfortunately is a timeslot where I also have some other sessions I would like to see (What is value and Mythbusting Remote Procedure Calls), but it might be that I have to change my mind, as it could also be fun to get an introduction to the Raspberry Pi.

After watching this video, I feel confident that I will meet developers with a quality/testing mindset at the GOTO Aarhus 2012 Conference this year, and I'm looking forward to talking with you about your views on test automation, and how I as an automation tester can bring value and increase the quality of our software products, even if it's no longer a dedicated test automation developer writing the functional tests.

Feel free to comment on this post below.




Code bug vs. Idea bug

September 5, 2012 22:43 by author rasmus

In my last post I mentioned the “idea bug”, which I have gotten a number of questions about. I have taken the term from the GTAC 2011 Opening Keynote – Test is Dead (a good video, by the way). If you go approximately 17:00 into the video, the presenter defines the terms as:

Code bug:
Wrong product behavior

Idea bug:
Wrong product!

Accompanied by the following fantastic illustration:

[Image: illustration from the keynote contrasting a code bug with an idea bug]

Hope it is clear why we want to avoid idea bugs :)




About the author

Team lead at Unity Technologies. Focus on automating any task possible. Author of e.g. http://uimaptoolbox.codeplex.com

Twitter: @RasmusSelsmark
