Rasmus Møller Selsmark

On software test and test automation

Book review: Lean from the Trenches - Managing Large-Scale Projects with Kanban

January 4, 2013 07:33 by author rasmus

The last book I read from the “Pragmatic Programmers” series was “Ship It!: A Practical Guide to Successful Software Projects” some years back, which I remember as a well-written, practice-oriented book on developing software. I spent some time during the Christmas holiday reading “Lean from the Trenches: Managing Large-Scale Projects with Kanban”, which is also a short, extremely well-written, yet comprehensive book describing the author’s experiences from building a nationwide software system for the Swedish police.

Being a tester, I found the sections on how they did test and quality assurance on the project especially interesting. Along with my earlier experiences, this book confirms to me that testing is becoming an integrated part of software development, and that the role of the tester is moving towards quality assurance, rather than primarily focusing on executing tests. I still believe there is a future for exploratory testing, but scripted tests, even functional tests, seem in many cases to be best handled by the developers.

Minimum Viable Product or “How to slice the elephant”

The book starts out on page 6 by describing how they divided the system into small deliverables, in this case:

  • Start by deploying the system to one small region
  • First version only supports a small number of case types; the rest are handled the old way
  • Having the customer in-house

A few simple rules, but as the rest of the book shows, they play a very important role in ensuring a successful project.

Cause-Effect Diagrams

The first part of the book describes the project and processes at a more general level, whereas the last third goes into more detail on the techniques used by the team. One of the chapters in this section contains an introduction to getting started with Cause-Effect Diagrams/Analysis, for which the book describes the following basic process (page 134):

1. Select a problem – anything that’s bothering you – and write it down
2. Trace “upward” to figure out the business consequences, the “visible damage” that your problem is causing
3. Trace “downward” to find the root cause (or causes)
4. Identify and highlight vicious cycles (circular paths)
5. Iterate these steps a few times to refine and clarify your diagram
6. Decide which root causes to address and how (that is, which countermeasures to implement)

One of the examples from the book starts with the problem “Defects Released to Production”, from which “Angry Customers” is identified as a business consequence and “Lack of Tools and Training in Test Automation” as a root cause, as shown in the following example:

[Image: Cause-Effect Diagram for “Defects Released to Production”]

The diagram is taken from page 139, although the version in the book is more comprehensive, e.g. identifying multiple root causes. In relation to Cause-Effect Analysis, the book says on page 52 that

“Bugs in your product are a symptom of bugs in your process”

which is another way of phrasing the well-known statement that you cannot test quality into a product; quality needs to be an integrated part of the software development process.

From a testing perspective, this book contains a lot of useful information. E.g. chapter 9, “Handling Bugs”, visualizes how “continuous testing” helps minimize the total time spent on testing and bug-fixing, by having bugs fixed as early as possible. It’s been a long time since I’ve met a team that only tests at the end of a release cycle, but the following figures still show how we can benefit from “continuous testing” in order to reduce the total cost of developing software.

Traditional waterfall approach:

[Figure: testing and bug-fixing in a traditional waterfall approach]

Using continuous testing:

[Figure: testing and bug-fixing with continuous testing]

Even though we spend more time on testing, it pays off by lowering the time spent on bug-fixing:

[Figure: total time spent, waterfall vs. continuous testing]

Limit to 30 active bugs

Another interesting idea from this chapter is to set a WIP (Work in Progress) limit of 30 bugs. Bugs that don’t make it into the top 30 are immediately marked as “deferred” and won’t be fixed. If a new bug is considered more important, one of the existing 30 bugs is removed from the list. This is an effective way of limiting the number of open bugs, which I think also helps keep the focus on fixing bugs immediately (even without necessarily reporting them), since you don’t have the option of just adding a bug to a pile of probably several hundred already-reported bugs. You easily get “bug-blind” when you have more than 50-100 bugs reported in your system, and setting a hard limit of e.g. 30 seems to be a good way to mitigate this issue.

Reducing the test automation backlog

Chapter 18 is dedicated to describing how to handle the lack of test automation for a legacy product, where the advice is to have the whole team increase the test automation coverage a little each iteration. It’s important to note that it should be the whole team, and not e.g. a separate automation team, as the whole team should learn and be responsible for writing tests. For identifying which test cases to automate first, the book prioritizes the tests by:

  • How risky the test case is, i.e. will it keep you awake if this test is not executed?
  • Cost of executing test manually
  • Cost of automating test case
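
The book prioritizes by these factors without prescribing a formula. As a purely hypothetical illustration (the numbers and the risk × manual cost / automation cost score below are mine, not from the book), the automation backlog could be sorted like this:

# Hypothetical scoring of which test cases to automate first
$testCases = @(
    New-Object PSObject -Property @{Name="Login"; Risk=5; ManualCost=2; AutomationCost=8}
    New-Object PSObject -Property @{Name="Search"; Risk=3; ManualCost=4; AutomationCost=6}
    New-Object PSObject -Property @{Name="Export"; Risk=1; ManualCost=1; AutomationCost=10}
)

# Higher score = riskier and/or expensive to run manually, but cheap to automate
$testCases |
    Select-Object Name, @{N="Score"; E={($_.Risk * $_.ManualCost) / $_.AutomationCost}} |
    Sort-Object Score -Descending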

Besides what I’ve described in this review, the book also contains valuable tips on how to organize your scrum boards, standups etc.

In short, this is a really good book, and a quick read as well, so I recommend it to everyone participating in software development projects.




Using TFS API and remote registry to keep track of what’s installed on our lab machines

December 20, 2012 23:16 by author rasmus

As described in http://rasmus.selsmark.dk/post/2012/09/26/Levels-of-Software-(Test)-Automation.aspx, we have put a lot of effort at ScanJour into automating our lab, which currently runs approx. 400 virtual machines using TFS 2010 Lab Manager. Running this many machines in SCVMM/Lab Manager is a challenge in itself, which I might come back to in a later post. In order to keep track of all these machines, my colleague Kim Carlsen started developing an internal website (“labstat”) which displays information about:

  • Deployed machines and their state
  • Available RAM and disk space on each SCVMM host
  • Number of machines deployed per TFS project
  • Available disk space in libraries

All of this information is of course available from the SCVMM console, but to help people serve themselves, we expose it to our users. A small part of the site looks like the following:

[Screenshot: part of the labstat website]

All of this information is pulled from SCVMM using PowerShell scripts. This sample shows how to get machine details using the Get-VM cmdlet:

Add-PSSnapin Microsoft.SystemCenter.VirtualMachineManager

# VMs deployed and in library. Lab Manager stores its metadata as XML in the
# VM description field, which is parsed into separate columns here.
# Out-DataTable and Write-DataTable are helper functions (not built-in
# cmdlets), e.g. from the Microsoft TechNet script gallery.
$dt = Get-VM |
    Select-Object @{N="VmName"; E={$_.Name}},
        @{N="LabName"; E={([xml]($_.Description)).LabManagement.LabSystem.InnerText}},
        Owner, Status, Description, HostName, Location, CreationTime,
        @{N="Project"; E={([xml]($_.Description)).LabManagement.Project}},
        @{N="Environment"; E={([xml]($_.Description)).LabManagement.LabEnvironment.InnerText}},
        @{N="Snapshots"; E={$_.VMCheckpoints.Count}},
        HostType,
        @{N="TemplateName"; E={([xml]($_.Description)).LabManagement.LabTemplate.InnerText}},
        Memory |
    Out-DataTable

# Refresh the VM table with the current snapshot of the lab
Invoke-Sqlcmd -Query "DELETE FROM VM" -Database $Global:VMMDatabase -ServerInstance $Global:ServerInstance
Write-DataTable -ServerInstance $Global:ServerInstance -Database $Global:VMMDatabase -TableName "VM" -Data $dt

With approx. 400 machines running, another question that often pops up is “Do we have a test environment with version X of product Y?”. The solution is the following page, which shows detailed information for each environment (apologies for the layout; we’re probably not going to win any design awards for this website…):

[Screenshot: environment details page]

The information presented for each environment is:

  • Environment name and owner/creator
  • The “InUse” column simply queries TFS for the “Marked In Use” information you can set on an environment. The advantage of using this property is that it can be set without having to shut down the environment, as opposed to e.g. changing the description field, which can only be done when the environment is not running
  • Machines in the environment, both with the internal name (almost all of our environments are network isolated) and OS
  • Finally the products installed in the environment

We store all of this data in a separate database, which is updated every 10 to 60 minutes, depending on the type of information. The SCVMM data (deployed machines, state etc.) is queried every 10 minutes, whereas collecting information about installed products is a more time-consuming operation and is thus only done once per hour.

In order to access the remote registry on the lab machines, the firewall must be configured, which is done using this PowerShell function:

function Initialize-FirewallForWMI
{
    Write-Host "Opening Windows Firewall for WMI"
    Start-Process -FilePath "netsh.exe" -ArgumentList 'advfirewall firewall set rule group="windows management instrumentation (wmi)" new enable=yes' -NoNewWindow -Wait
}

Using TFS API for getting information about deployed lab environments

Getting details for environments and machines in the lab is simply a matter of accessing the TFS API. One interesting detail is that we also get the IP address of each machine, to make it easier for people to remote desktop to the machines without necessarily having to open the Microsoft Test and Lab Manager client.

// open database and TFS connection
using (SqlConnection cnn = new SqlConnection(databaseConnectionString))
using (TfsTeamProjectCollection tfs = new TfsTeamProjectCollection(new Uri(tfsUrl)))
{
    tfs.EnsureAuthenticated();
    cnn.Open();

    DataAccess.ResetIsTouchedForLabEnvironmentsAndMachines(cnn);

    LabService labService = tfs.GetService<LabService>();

    ICommonStructureService structureService = (ICommonStructureService)tfs.GetService(typeof(ICommonStructureService));
    ProjectInfo[] projects = structureService.ListAllProjects();

    // iterate all TFS Projects
    foreach (ProjectInfo project in projects)
    {
        LabEnvironmentQuerySpec leqs = new LabEnvironmentQuerySpec();
        leqs.Project = project.Name;
        var envs = labService.QueryLabEnvironments(leqs);

        // Iterate all environments in current TFS project
        foreach (LabEnvironment le in envs.Where(e => e.Disposition == LabEnvironmentDisposition.Active))
        {
            Trace.WriteLine(String.Format("Project: {0}; Environment: {1}", project.Name, le.Name));

            // need to reload in order to get ExtendedInfo data on machines in environment
            LabEnvironment env = labService.GetLabEnvironment(le.Uri);

            DateTime? inUseSince = null;

            if (env.InUseMarker != null)
            {
                inUseSince = env.InUseMarker.Timestamp;
            }

            LabEnvironmentDTO envData = new LabEnvironmentDTO
            {
                Id = env.LabGuid,
                Name = env.Name,
                Description = env.Description,
                ProjectName = env.ProjectName,
                CreationTime = env.CreationTime,
                Owner = env.CreatedBy,
                State = env.StatusInfo.State.ToString(),
                InUseComment = (env.InUseMarker == null ? "" : env.InUseMarker.Comment),
                InUseByUser = (env.InUseMarker == null ? "" : env.InUseMarker.User),
                InUseSince = inUseSince
            };
            DataAccess.Save(cnn, envData);

            // Iterate machines in environment
            foreach (LabSystem ls in env.LabSystems)
            {
                string computerName = String.Empty;
                string internalComputerName = String.Empty;
                string os = String.Empty;
                StringBuilder ip = new StringBuilder();

                if (ls.ExtendedInfo != null)
                {
                    computerName = ls.ExtendedInfo.RemoteInfo.ComputerName;
                    internalComputerName = ls.ExtendedInfo.RemoteInfo.InternalComputerName;
                    os = ls.ExtendedInfo.GuestOperatingSystem;

                    if (!String.IsNullOrWhiteSpace(computerName))
                    {
                        try
                        {
                            IPAddress[] ips = Dns.GetHostAddresses(computerName);

                            foreach (IPAddress ipaddr in ips)
                            {
                                if (ip.Length != 0)
                                    ip.Append(",");
                                ip.Append(ipaddr);
                            }
                        }
                        catch (Exception ex)
                        {
                            ip.Append(ex.Message);
                        }
                    }
                }

                LabMachineDTO machineData = new LabMachineDTO
                {
                    Id = ls.LabGuid,
                    Name = ls.Name,
                    LabEnvironmentId = le.LabGuid,
                    ComputerName = computerName,
                    InternalComputerName = internalComputerName,
                    IpAddress = ip.ToString(),
                    OS = os,
                    State = ls.StatusInfo.State.ToString()
                };

                DataAccess.Save(cnn, machineData);

                string machineDisplayName = String.Format(@"{0}\{1}\{2}", project.Name, le.Name, internalComputerName);
                ExtractPropertiesForLabMachine(cnn, ls, machineDisplayName); 
            } // foreach machine
        } // foreach environment
    } // foreach project

    // TODO: DataAccess.DeleteUntouchedEnvironmentsAndMachines(cnn);
} // using SqlConnection + TFS

Querying lab machines for information about installed products

My first attempt at getting information about installed products on a remote machine was using “Get-WmiObject -Class Win32_Product” in PowerShell, which works, but as described at http://sdmsoftware.com/wmi/why-win32_product-is-bad-news/ has the unwanted side effect that each MSI product queried on the remote machine is reconfigured/repaired. Because of this, and because repairing each product on the machine took a long time, I decided to implement this using remote registry access instead, reading values from “HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall” where Windows stores information about installed applications. Both the 32- and 64-bit registry hives are queried.
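
For reference, the rejected approach was roughly the following one-liner (Get-WmiObject and the Win32_Product class are real, but note again that querying the class has the repair side effect described above):

# Works, but triggers a reconfiguration/repair of every MSI product it queries
Get-WmiObject -Class Win32_Product -ComputerName $machineName |
    Select-Object Name, Vendor, Version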

The code for accessing the remote registry is shown here:

/// <summary>
/// Populates the products for machine using remote registry access.
/// </summary>
/// <param name="machineName">Name of the machine.</param>
/// <param name="products">Reference to products collection, that will be populated.</param>
/// <param name="registryMode">The registry mode (32- or 64-bit).</param>
/// <returns>False, if e.g. access denied, which means no reason to try subsequent reads</returns>
private static bool PopulateProductsForMachine(string machineName, List<Product> products, RegistryMode registryMode)
{
    string registryPath = @"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall";

    if (registryMode == RegistryMode.SysWow64)
        registryPath = @"SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall";

    try
    {
        using (RegistryKey remoteRegistry = RegistryKey.OpenRemoteBaseKey(RegistryHive.LocalMachine, machineName))
        {
            if (remoteRegistry == null)
                return false; // could not open HKLM on remote machine. No need to continue

            using (RegistryKey key = remoteRegistry.OpenSubKey(registryPath))
            {
                if (key == null)
                    return true; // no error, we just couldn't locate this registry entry

                string[] subKeyNames = key.GetSubKeyNames();
                foreach (string subKeyName in subKeyNames)
                {
                    RegistryKey subKey = key.OpenSubKey(subKeyName);

                    if (subKey == null)
                        continue;

                    string name = GetRegistryKeyValue(subKey, "DisplayName");
                    string vendor = GetRegistryKeyValue(subKey, "Publisher");
                    string version = GetRegistryKeyValue(subKey, "DisplayVersion");

                    if (!String.IsNullOrWhiteSpace(name))
                    {
                        products.Add(new Product { Name = name, Vendor = vendor, Version = version });
                    }
                }
            } // using key
        } // using remoteRegistry

        return true;
    }
    catch (Exception ex)
    {
        Trace.WriteLine(String.Format("An exception occurred while opening remote registry on machine '{0}': {1}", machineName, ex.Message));
        return false;
    }
}

private static string GetRegistryKeyValue(RegistryKey key, string paramName)
{
    if (key == null)
        throw new ArgumentNullException("key");

    object value = key.GetValue(paramName);
            
    if (value == null)
        return String.Empty;

    return value.ToString();
}

Before you can access the remote registry using RegistryKey.OpenRemoteBaseKey(), you need to authenticate against the machine, which is done by connecting to the “C$” system share on the machine.
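
The NetworkShare class used below is a small helper that isn’t shown in this post (such helpers are typically implemented on top of the WNetAddConnection2 Win32 API). Conceptually it does the equivalent of this PowerShell sketch, where machine name, domain and password are placeholders:

# Authenticate against the hidden C$ share, so that subsequent
# RegistryKey.OpenRemoteBaseKey() calls against the machine succeed
$share = "\\$labMachineDnsName\C`$"
net use $share $password /user:"$domain\Administrator" | Out-Null
try {
    # ... read the remote registry here ...
}
finally {
    net use $share /delete | Out-Null
}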

// connect to server
NetworkShare share = new NetworkShare(labMachineDnsName, "C$", @"(domain)\Administrator", "(password)");
try
{
    share.Connect();

    // get list of all products installed on machine
    List<Product> products = new List<Product>();

    if (!PopulateProductsForMachine(labMachineDnsName, products, RegistryMode.Default))
        return; // failed to connect to remote registry -> don't try any further on this machine

    PopulateProductsForMachine(labMachineDnsName, products, RegistryMode.SysWow64);

    // filter out products by ScanJour or selected MS apps
    var relevantProducts =
        from p in products
        where (p.Vendor.Equals("ScanJour", StringComparison.OrdinalIgnoreCase))
            || (p.Name.StartsWith("ScanJour", StringComparison.OrdinalIgnoreCase))
            || (p.Name.StartsWith("Microsoft Office Professional"))
            || (p.Name.StartsWith("Microsoft Office Enterprise"))
            || (p.Name == "Microsoft Visual Studio 2010 Premium - ENU")
            || (p.Name == "Microsoft Visual Studio 2010 Professional - ENU")
            || (p.Name == "Microsoft Visual Studio Premium 2012")
        select p;

    foreach (Product product in relevantProducts)
    {
        LabMachinePropertyDTO data = new LabMachinePropertyDTO
        {
            LabMachineId = machine.LabGuid,
            Id = "Product",
            Value = String.Format("{0} ({1})", product.Name, product.Version)
        };

        Trace.WriteLine(String.Format("Adding product '{0}'", data.Value));

        DataAccess.Save(cnn, data);
    }

    // Find Oracle version on machine
    string oracleVersion = GetOracleVersionForMachine(labMachineDnsName);
    if (!String.IsNullOrWhiteSpace(oracleVersion))
    {
        LabMachinePropertyDTO data = new LabMachinePropertyDTO
        {
            LabMachineId = machine.LabGuid,
            Id = "Product",
            Value = oracleVersion
        };

        Trace.WriteLine(String.Format("Adding product '{0}'", data.Value));

        DataAccess.Save(cnn, data);
    }
}
catch (Exception ex)
{
    Trace.WriteLine("An exception occurred while getting lab machine properties:");
    Trace.WriteLine(ex.ToString());
}
finally
{
    // Disconnect the share
    share.Disconnect();
}

As can be seen from the code, we query for our own products (vendor or name “ScanJour”) as well as the versions of Visual Studio and Microsoft Office installed on the machines. We also display which version of Oracle is installed in the environment, again by simply querying for a specific registry key.
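
The GetOracleVersionForMachine() helper isn’t shown above. As a sketch of the idea: Oracle registers its installations under HKLM\SOFTWARE\ORACLE, but the exact subkey and value names vary between Oracle versions, so treat the “KEY_*” and “VERSION” names below as assumptions:

# Hypothetical sketch: read the Oracle version from the remote registry
$hklm = [Microsoft.Win32.RegistryKey]::OpenRemoteBaseKey("LocalMachine", $machineName)
$oracle = $hklm.OpenSubKey("SOFTWARE\ORACLE")
if ($oracle -ne $null) {
    $oracle.GetSubKeyNames() | Where-Object { $_ -like "KEY_*" } | ForEach-Object {
        $homeKey = $oracle.OpenSubKey($_)
        $version = $homeKey.GetValue("VERSION")   # value name is an assumption
        if ($version) { "Oracle $version" }
    }
}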

Conclusion

In my previous blog post, I described how we have automated the deployment of test environments in the lab. In this post we have covered another important aspect of maintaining a large test lab, namely getting an overview of the state of lab environments and machines. It of course requires some effort to build up an automation framework around your lab infrastructure, but once in place, it has given us the following benefits:

  • Minimized manual time spent on setting up and maintaining lab environments
  • An overview of lab usage, available to all users
  • Increased predictability when setting up multiple environments, since all configuration of environments is automated
  • Easier rollout of changes to base templates, e.g. we have one Domain Controller template which is regularly updated with the latest Windows Updates. By automating environments, we get “fresh” environments more often, whereas earlier it was a manual process to set up a new environment, and therefore not done as often.

The main conclusion here is that you should invest some time in automation for your lab. It certainly does cost some time, but now that we have it, I don’t understand how we were able to get any work done in the lab back when we were setting up environments manually :)




GOTO Aarhus 2012 at a glance

October 4, 2012 23:32 by author rasmus

The GOTO Aarhus 2012 conference is over, and I can look back at some really good and inspiring days, where I had the pleasure of hearing speakers like Martin Fowler talk about NoSQL databases.

[Photo: Martin Fowler on stage]

As the title says, it was an introduction to NoSQL databases, but with references back to relational databases and even network databases, which were the predecessors of relational databases. I’m old enough to have been taught about network databases at university, but it was nevertheless really interesting to see how a computer science field like databases is evolving. As with many of the other speakers at the conference, Martin Fowler did a very good job of keeping the right balance between theory and practice.

Scott Hanselman talked about how the browser and the Web have evolved, so that it’s now possible to run even a Commodore 64 or Linux emulator in JavaScript. As he said, “JavaScript is the Assembler Language of the Web”:

[Photo: slide from Scott Hanselman's talk]

This was followed up by Anders Hejlsberg and some of his team members from Microsoft, presenting TypeScript, a new type-safe language with classes that compiles into JavaScript. Of course, the TypeScript compiler is itself written in JavaScript :) (which is quite useful, as it can then easily run on different operating systems)

[Photo: from the TypeScript presentation]

 

Michael T. Nygard and several others had some really good sessions on Continuous Deployment. Michael talked about the concept of a “deployment army”; I think we all know of situations where a deployment has required special skills.

[Photo: Michael T. Nygard on stage]

One of the purposes of Continuous Deployment is to eliminate the need for big releases, by building up confidence around the software product, so it really is always in a releasable state. Then you can decide whether to release it or not, depending on your type of software and customers. And these are only a few of all the speakers I’ve heard over the last three days (actually, the ones where I had some decent pictures...)

 

As said, these have been some fantastic days at the GOTO conference. The conference has found just about the right balance between theory and practice in the sessions; only very few sessions were either too theoretical or too practically oriented. Also, comparing this conference to Microsoft TechEd, which I attended earlier this year, I find the GOTO conference to be at a much higher level, both when it comes to speakers and topics.

So if you are considering only one conference for next year (and GOTO is able to keep up the high professional level of speakers and topics), I will definitely recommend that you consider attending the GOTO conference. At least I certainly hope that I can be part of it again next year.




GOTO Aarhus 2012 - Day 2: What, no testers?!

October 2, 2012 22:35 by author rasmus

The good thing for me as a tester about attending a developer conference like GOTO is that I get a different perspective on my profession, which I probably wouldn’t have gotten at a usual test conference. Today I saw a couple of sessions on Continuous Deployment, most remarkably a presentation by Etsy (http://www.etsy.com/) showing how they do 30 deployments to production daily.

A common anti-pattern in relation to deployment is the “deployment army”, presented by Michael T. Nygard in his session: as a result of having few deployments, each release is so large that you need an “army” of people to deploy it. This makes every deployment expensive, so you cannot afford to increase the number of deployments, and the result is a kind of deadlocked situation that you need to get out of. Continuous Deployment may solve this, as soon as you have accepted that you have a problem.

The benefit of doing Continuous Deployment is that the software development team gets almost immediate feedback on deployed changes, without having to wait before releasing into production. One example from Etsy: after a release, they noticed an increase in the number of warnings (a vertical red line on their graph at approx. 16:05). Within 10-15 minutes they reacted to the situation and were able to release a new version which fixed most of it, and 30 minutes after the incident, the issue was solved completely. Of course we want to avoid such issues, but when they happen, it’s important that the organization can react fast. So the benefit is basically really fast and relevant feedback on code changes. I’ll get back to the implications this has for an organization.

As stated by several speakers today, this of course requires the architecture to be prepared for this way of deploying, especially not having one big monolithic system, but being able to deploy individual components of the system. For people skeptical about whether Continuous Deployment can actually work, Jez Humble mentioned in his presentation that when Flickr was acquired by Yahoo in 2005, they compared Flickr (which was deploying continuously to production) against all other Yahoo services, and it turned out that Flickr had the highest uptime. So Continuous Deployment certainly can work, even for high-uptime sites.

But let’s take a look at a deployment process in general:


[Diagram: a general deployment process]
(NFR Test: Non-functional requirements testing)

This looks very similar to the deployment cycle I’m used to, although in my situation some of the steps are manual or partly manual. In Continuous Deployment the cycle must be performed several times per day, which leaves very limited time (actually none) for any manual task. So moving towards Continuous Deployment means that all of these processes, including test, must be automated. As it’s the developers who monitor the production site (performance, errors etc.), there aren’t even dedicated system operators involved. The DevOps (developer/operations) role is part of the individual development team.

To follow up on the title of this blog post, I got a couple of minutes to talk with Michael T. Nygard (independent consultant) and Mike Brittain (Etsy) about the role of test automation. The very interesting answer from them was that Etsy doesn’t have dedicated test automation engineers, and Michael said that he doesn’t see this role in new organizations, but rather occasionally sees an SRE (software reliability engineer) role. Amazon is also present at the conference with a booth, which basically seems to be a recruitment campaign among Danish developers. I took a look at their job postings, and it turns out that they have no test automation engineer jobs either. I heard from others that Netflix (a TV/video streaming site) also mentioned in their presentation that they have a very limited number of testers.

Conclusion on state of test automation in the industry

I started out my series of GOTO blog posts at http://rasmus.selsmark.dk/post/2012/08/27/GOTO-Aarhus-2012-Is-it-time-for-developers-to-move-beyond-unit-tests.aspx by asking the question

“Is it common to have automation testers in the industry (like my current job) or is automation a part of usual software development activities?”

After today, I must conclude that it’s not common to have dedicated testers (at least not in newer companies), and that in order to do Continuous Deployment fully, dedicated testers become a bottleneck, so this role does not exist in such organizations. In the case of Etsy, they even have very few (<5) people doing manual exploratory testing, out of a total of 350 employees. So the game is certainly changing for testers, if you move towards more frequent releases. I don’t feel concerned as such, as I’m sure any good tester has valuable domain knowledge which can be used elsewhere in the organization, but we should be aware that the world is changing for us.




GOTO Aarhus 2012 - Day 1: New infrastructure creates new types of companies

October 1, 2012 23:12 by author rasmus

Monday's keynote at the GOTO Aarhus conference this year was held by Rick Falkvinge, leader of the Swedish Pirate Party, a political party working for free information and a free Internet. The title of the keynote was “Red flags on the internet”, referring to the “Red Flag Act” of 1865, a law introduced in the United Kingdom which required a person to walk in front of every car waving a red flag, to warn pedestrians.

[Image: slide from the “Red flags on the internet” keynote]

It turned out that this law was lobbied for by the British railways in order to secure their interests. But the result was that Germany gained a 20-year advantage for its automobile industry, so the law ended up hurting Britain more than it helped. Rick went on to other examples from history, ending with Kodak, who actually invented the digital camera back in 1975, but because their income depended heavily on their analog film products, they didn’t develop the new digital technology further and eventually went bankrupt in January 2012. So the point here is that it doesn’t help to protect/hide information and inventions, as they will emerge at some point anyway.

Even though the Swedish Pirate Party is currently mostly a protest party, he drew lines back to the Green politics of 40 years ago, which are now a common part of the official program of all political parties. He didn't give a good answer to how we can actually still get paid if all information is freely available, but I still find it interesting to see how history has shown that information and new technologies cannot be hidden away.

New infrastructure introduces new types of companies

With the shift from e.g. water mills to electricity, it was no longer necessary to build factories close to the energy source; they could instead be built where the actual need was. Unfortunately I didn’t get a picture of the actual slide, but I find it thought-provoking how similarly we are still constrained today, e.g. in where companies can be built, so I think we need a picture of a good old watermill-powered factory from Wikipedia. I’ll get back to this shortly.

[Image: watermill in Braine-le-Château, from Wikipedia]

RubyMotion - a company of the Internet age

This brings me to Laurent Sansonetti, founder of RubyMotion, a platform for developing Ruby-based applications for e.g. the iPhone. I was fortunate enough to have a talk with Laurent about his company.

[Photo: Laurent Sansonetti]

Laurent has worked as a developer for several large companies, including 7 years at Apple, before deciding to work on his own project, RubyMotion. Moving from a secure job at Apple to starting his own company resulted in a couple of very uncertain months with different challenges, and sometimes fear that the product would never make it. But it went well, and they now have several large customers and are making money. Congratulations; it’s good to see that there is always room for new innovative products.

The structure of the company is what Laurent calls a “distributed startup”, meaning that the current three employees are spread across the world (Belgium, the United States and Japan) and don’t have any office yet. They work from wherever they choose, more or less at the times they choose, and communicate using Campfire, a group-based chat system where they can have ongoing discussions and see what has been talked about while they were gone. Being spread across the world this way means that the company is more or less always “open”. They then meet physically for a week every third month, in order to actually see each other and have face-to-face meetings.

Another advantage of being distributed this way is easier access to talent, as people don’t need to relocate in order to work for you, and otherwise perhaps wouldn’t be able to take the job. As Laurent said, it’s hard to find good programmers, and even harder to have them relocate.

I think this is a good example of a company that leverages the possibilities of the internet age to build a startup that would have been difficult to imagine a few years ago. It’s especially worth noting that with only 3 people they are able to keep the company running most of the day. Like back when factories had to be built close to an energy source, I still think that (IT) companies of today are constrained to some extent by access to talent or other kinds of infrastructure. Although I actually like going to the office (working zone) every day, I think it’s worth paying attention to the possibilities available for starting a new company with very little “deadweight” like physical buildings, network, servers etc., when all of this can be distributed/hosted on the net.




Levels of Software (Test) Automation

September 26, 2012 21:52 by author rasmus

While doing software test automation, I have realized that there is a lot more to automate than just the tests, in order to have test automation run successfully and to minimize maintenance. For testing Captia I see the following layers, which I assume are applicable to other software products as well:

[Diagram: layers of the test automation eco-system]

The important point here is that there is a foundation below your test automation, which must work properly in order to have a stable environment for running the tests. For a long time I focused on the Tests layer of the above diagram, whereas at ScanJour we have recently worked on the lower parts of the “test automation eco-system”. Especially Jens Tidemann, who started here at the beginning of the year, has pushed this a lot by developing PowerShell scripts for setting up environments and the system under test.

This blog post describes our solution for automating the process of setting up test environments in a uniform way across products.

Use Source Control for storing automation scripts and configuration

All topics described in this post can be automated, and are in our case. As with any other test automation, treat this as a regular development project, i.e.:

  • Plan
  • Store all assets (scripts, tools, configuration files etc.) in source control
  • Write unit tests when possible for the tools
  • Consider backwards compatibility when making changes to the tools. In our case the automation supports different versions of the product, which has been really valuable for recreating test environments that were previously set up manually.

As described later, even configurations for setting up customized test environments can be stored in source control.

“Test Environments” layer

This layer covers setting up and configuring test environments, i.e. the machines for executing either manual or automated tests. In our case we’re running virtual machines/environments using Microsoft TFS Lab Management, but this layer could also be based on another virtualization technology or on physical machines/devices.

For setting up the raw machines, we’re currently implementing Microsoft Deployment Toolkit, which automates the deployment of Windows images with/without additional software packages like Microsoft Office etc. that are required by the System Under Test.

Another part of automating this layer is PowerShell scripts that perform the following tasks (a condensed sketch follows the list):

  • Join domain (we’re typically running network-isolated environments with their own domain controller). This requires the user to enter credentials, so it is done as the first step; the rest of the tasks are performed automatically.
  • Install general prerequisites, e.g. .NET and IIS
  • Install the latest Windows Updates (with an XML-based “exclude” list containing e.g. IE9, which we don’t want installed)
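
Below is a condensed, hypothetical sketch of these three tasks; the domain name, the Windows feature names and the exclude-file format are illustrative, not our actual scripts:

# 1. Join the test domain (requires credentials, hence done as the first step)
Add-Computer -DomainName "testlab.local" -Credential (Get-Credential)

# 2. Install general prerequisites, e.g. .NET and IIS
Import-Module ServerManager
Add-WindowsFeature NET-Framework-Core, Web-Server, Web-Asp-Net

# 3. Find pending Windows Updates, skipping titles listed in the exclude file
[xml]$excludeXml = Get-Content "WindowsUpdateExcludes.xml"
$excluded = $excludeXml.Excludes.Update | ForEach-Object { $_.Title }

$session  = New-Object -ComObject Microsoft.Update.Session
$searcher = $session.CreateUpdateSearcher()
$pending  = $searcher.Search("IsInstalled=0").Updates |
    Where-Object { $excluded -notcontains $_.Title }
# ... download and install $pending through the same COM API ...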

With this approach, the effort for deploying a baseline environment has been reduced to something like 10-20 minutes of manual work. It of course still takes several hours to deploy the environment in TFS Lab Management, but most of this happens automatically and therefore doesn’t require user interaction.


After the environment has been deployed and the above steps performed, a “baseline” snapshot is taken, which contains:

  • Windows OS
  • Joined the test domain
  • Configured IIS etc.
  • Windows Updates
  • Microsoft Build Agent in order to do automated deployments from TFS
  • Optionally Microsoft Test Agent if used for automated test execution

Whereas the baseline does not contain:

  • Active directory users
  • Prerequisites for system under test, like database drivers etc.

This way we have environments that can be used for multiple purposes. Be careful what you choose to include in the baseline snapshot, so as not to “pollute” it with components that might lock the environment to too specific a purpose. In our case we’re e.g. not installing Oracle database drivers in the baseline environments, since this would mean that only a specific version of Captia could be installed.

“System Under Test” layer (with prerequisites)

With a baseline for the test environments as described above, the product to be tested can be installed on any of the available environments, depending on the type of test to be performed.

[Image: overview of available test environments]

E.g. if just performing a quick initial check or testing some simple functionality, one of the small environments can be selected. Alternatively, when testing more complex scenarios, there are a few complex environments available for this purpose.

Prerequisites

A key point here is being able to automate the installation of both the product to be tested and the prerequisites for the product. Examples of prerequisites that can be installed automatically are:

  • Database drivers
  • Third party dependencies

If a prerequisite either takes a long time to install or is not easily automated, you might choose to install it manually and include it in the baseline snapshot. But this means that the environment is then “locked” and might not easily be used for other purposes.

System Under Test

With the prerequisites installed, the actual product to be tested can be installed. Depending on the product, there might be different ways to do this. If you have multiple teams working on different products, consider making a common way of installing each product, in order to make it possible for other teams to set up integration test environments.

We have solved this by developing PowerShell scripts that read from an XML file which modules to install. Each team then develops the PowerShell scripts needed for installing their product. Below is an example of the XML configuration file used for installing Captia on a test environment:

[Image: XML configuration file for installing Captia]
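
Since the configuration file is only shown as a screenshot in the original post, here is a purely hypothetical illustration of the concept; all element and module names below are made up:

# Hypothetical module configuration and reader; names are invented
[xml]$config = @'
<Installation product="Captia">
  <Module name="CaptiaWebClient" version="4.5" />
  <Module name="CaptiaDocumentServices" version="4.5" />
</Installation>
'@

foreach ($module in $config.Installation.Module) {
    # each team provides the actual install script for their own modules
    Write-Host ("Installing {0} version {1}" -f $module.name, $module.version)
}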

“Test Data” layer

Having installed the product, the next step is to apply test data. For this purpose we have developed the Test Data Repository, which is able to populate the following types of data:

  • Active Directory users and groups
  • Exchange mailboxes
  • Files
  • Test data in Captia (e.g. cases and documents)

As can be seen from the conceptual overview below, the Test Data Repository has both a UI front-end and an API.

[Diagram: conceptual overview of the Test Data Repository]

The UI is used in manual testing for populating test data, whereas the API can be used by automated tests for creating prerequisite data for the test and for asserting results.
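
As a purely hypothetical sketch of that pattern (the Test Data Repository API is internal and not shown in this post, so the type and method names below are invented):

# Create prerequisite data through the API before the test, assert on it after
$repo = New-Object ScanJour.TestData.RepositoryClient "http://testdata/api"   # invented type
$case = $repo.CreateCase("Smoke test case", "testuser1")                      # invented method
# ... exercise the system under test here ...
if ($repo.GetCase($case.Id).DocumentCount -ne 1) { throw "Expected 1 document" }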

“Customizations” layer

When doing customizations, it’s important that all customizations are installed in similar ways and support silent install. Many of our customers have minor or larger customizations, which are installed on top of the standard product.
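
E.g. for MSI-based packages, silent install is a matter of standard msiexec switches (the package name below is a placeholder):

# Silent, unattended installation of a customization package, with verbose log
msiexec /i CustomerPackage.msi /quiet /norestart /l*v install.log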

Automated installation of customizations in test environments is configured in the same XML files used above for installing the standard products. This way the XML configuration file also documents the customer's configuration, i.e. the version of the standard product and add-ons, as well as their specific customization.

[Image: XML snippet showing a customization]

The XML snippet above shows a customization consisting of two packages.

“Customization Test Data” layer

If the customer has any special test data, it is defined in one or more XML files, which are populated into the system using the Test Data Repository described above. It’s important to handle a customization like a standard software product, i.e. don’t develop anything special for a single customer, but focus on using the same structure and tools for all customizations, in order to be able to automate deployment and test for the customizations as well.

[Image: XML test data for a customization]

The XML above shows test data for a customization, which is applied after the customer solution has been installed in the test environment.

“Tests” layer

With all this in place, we now have a system, optionally with customizations, ready for test. Typically, deploying an environment with a customization takes 30-40 minutes, including reverting to the snapshot, without any user interaction. A huge step forward compared to manually setting up an environment.

TFS Build Definition

As we’re running TFS, the actual deployment is configured in a build definition:

[Screenshot: TFS build definition]

When starting a deployment, you specify which lab environment to use:

[Screenshot: selecting a lab environment]

And which machines on the environment to deploy to (in this case only the webserver):

[Screenshot: selecting machines to deploy to]

 

Test automation depends on environments, the system under test and test data (as well as many other things). Ensuring this foundation is in place is a key factor in developing successful test automation, and I hope this post has given some insight into how we have automated the deployment and configuration of test environments at ScanJour.




GOTO Aarhus 2012 - Developers *are* writing functional tests :)

September 9, 2012 22:33 by author rasmus

Even though this is part of my “warm-up” blog posts for the GOTO Aarhus 2012 conference, I’ll start out by referring to a related event this week. At our latest meeting in the Danish TFS User Group, held at the Microsoft office in Hellerup, Rune Abrahamsson from BRFkredit gave a presentation on how they are using http://cuite.codeplex.com/ for developing functional tests for one of their systems. If you are interested, Mads (our agile coach at ScanJour) has also blogged from the user group here.

Although the people attending the TFS User Group meetings are mostly developers, we often have at least one testing-related topic on the agenda, and this time it was clear that many of the developers present have experience with developing functional tests, often using a UI automation framework like Microsoft Coded UI. So even before attending the GOTO Aarhus 2012 conference, it seems I can conclude that developers are writing functional tests, which pleases me, as I think the people writing the software are also the best at writing automation for it. The testers can then do actual quality assurance, by ensuring that the customer requirements have been automated, as well as performing exploratory testing to exercise the software in new ways.

[Photo: slide from the BRFkredit presentation]

Sorry for the bad picture quality; the slide shows at the bottom which areas (including functional UI tests) are covered by developer tests, whereas e.g. exploratory tests are performed by their manual testers (who are actually domain experts, not full-time testers). I find it very positive to see a development team take quality seriously by including test automation, when they are in a situation where they lack testers. I guess this is not an uncommon scenario in the industry, that you have to get test assistance from other teams/departments.

And it makes me wonder whether manual testers should be forced to not do manual scripted testing at all, but only exploratory testing and quality assurance, since the developers will fill the gap themselves by automating the scripted test cases? :)

 

To put this in the context of this year's GOTO conference, I have been looking at the biographies of some of the speakers, and found a video by Steve Freeman on Sustainable Test-Driven Development, where he touches on topics like:

  • Production and test code that are too coupled, which makes refactoring difficult (beginning of the video)
  • Test code structure (similar to the Given/When/Then pattern of e.g. SpecFlow)
  • “You come back to the code after 6 months, and forgot why you did this” (approx. 13:00)
  • Patterns for writing test code. As simple as good variable naming, or using a DSL syntax to get more readable test code
  • Prepare for your test to fail at some point, simply by having clear error messages when it happens (“Explain yourself”, “Describe yourself” and “Tracer Objects”, around 23 mins into the video)
  • Make your tests robust, e.g. “Only Enforce Order When It Matters” (around 40:00)
  • “Tests Are Code Too” (43:35), which is the last slide and also seems to be the headline of the presentation

The last slide is shown here:

[Image: the “Tests Are Code Too” slide]

Although all four bullets here are important, I have myself faced the “If it’s hard to test, that’s a clue” problem when developing unit tests, both for production code and for test automation framework code. As an automation tester, I also regularly hit this problem when writing automated functional tests against production code, e.g. missing id’s on UI controls, no clear interface for testing etc. When the development team writes automated functional tests as part of their definition of done, it should hopefully result in code better suited for automation. And then the general conclusion: test automation should be treated like any other code activity.

This year Steve Freeman has a session on Raspberry Pi with extra toppings Monday at 13:20, which unfortunately is a timeslot where I also have some other sessions I would like to see (What is value and Mythbusting Remote Procedure Calls), but it might be that I have to change my mind, as it could also be fun to get an introduction to the Raspberry Pi.

After watching this video, I feel confident that I will meet developers at the GOTO Aarhus 2012 conference this year with a quality/testing mindset, and I’m looking forward to talking with you about your views on test automation, and how I as an automation tester can bring value and increase the quality of our software products, even when it's no longer a dedicated test automation developer writing the functional tests.

Feel free to comment on this post below.




Code bug vs. Idea bug

September 5, 2012 22:43 by author rasmus

In my last post I mentioned the term “idea bug”, which I have gotten a number of questions about. I have taken the term from the GTAC 2011 opening keynote, “Test is Dead” (a good video, by the way). If you go approx. 17:00 into the video, the presenter defines it as:

Code bug:
Wrong product behavior

 

Idea bug:
Wrong product!

 

Accompanied by the following fantastic illustration:

[Image: illustration from the keynote]

Hope it is clear why we want to avoid idea bugs :)




GOTO Aarhus 2012 – Is it time for developers to move beyond unit tests?

August 27, 2012 12:05 by author rasmus

I’ve been fortunate enough to be invited as an official blogger at the GOTO Aarhus 2012 conference this year, which most likely means a number of new readers will be reading this blog post, so I’ll start by giving a short introduction to myself.

10 years ago I shifted from “regular” software development to software testing, with a job as an SDET (Software Development Engineer in Test) at Microsoft Vedbæk (Denmark). After that followed a couple of years as a software developer (but still with focus on test and quality assurance), until I started in my current job as technical lead for test automation at ScanJour two years ago. We run Scrum in teams of 5-8 persons, usually with two domain (manual) testers and one automation tester in each team. The role of the automation tester ranges from automating already-defined manual functional test cases to performing load and performance tests.

Now I’ve got the chance to participate in a GOTO conference, which seems to be a true developer conference, judging by “Tester” not being an option in the Role field when you register – I registered myself as “Other” :)

[Screenshot: the Role field on the registration form]

 

So what’s the state of test automation among developers?

During the last 10-12 years in software development, I’ve seen how unit tests have helped increase the quality of the products we ship, by often ensuring that the most obvious bugs are caught early, but not least by ensuring that the individual classes/units are actually testable (otherwise it wouldn’t be possible to write unit tests against them).

But I still face problems when trying to write automated tests, due to lack of “testability” of the products in the following areas:

  1. No silent installation/uninstallation, or not officially supported (so if you report bugs, they are closed “by design”)
  2. Missing unique id’s on controls in webpages when doing UI tests, which makes it harder to write test automation and makes test cases less stable
  3. No clear testable layers in the application, which often means you have to resort to doing automation on the UI layer
  4. Test is not done in parallel with development, but sometimes when the developer has started working on a new task or even in a later sprint (unfortunately)

#4 is not only a developer issue, but a problem for the whole team, because it means that you risk finishing/shipping code which either contains bugs or turns out not to be testable.

Inspired by ISTQB (a certification for testers), I see the following levels of software tests (simplified; there are also load tests etc.), with unit tests being closest to the code and of course automated, and release acceptance testing often (but not necessarily) being a manual testing activity.

[Diagram: levels of software tests]

I highly value testing being independent from development (even if part of the same Scrum team), simply to get another pair of eyes to look at the system; I’ve seen many “idea bugs” (not understanding customer needs correctly) caught early by domain testers, that could otherwise easily have slipped all the way to the customer (where the cost of fixing them is higher).

Regarding the testability issues I raise above, I think many of them could be solved by having the software developers develop more automated functional tests, in the same way that unit tests ensure some degree of testability of units/classes. After all, it’s in the interest of the whole team to deliver working software to the customers as effectively as possible.

So the questions I’ll try to get answers to while at the GOTO Aarhus 2012 conference are:

  • What’s the state of automated functional testing in general?
  • Are software developers already automating non-unit tests like e.g. functional UI tests?
  • Is it common to have automation testers in the industry (like my current job), is automation a part of the usual software development activities, or have you actually been able to successfully do capture/replay test automation maintained by domain/manual testers? (I would be surprised if the latter were the case)

You are of course already welcome to give me input on these topics by leaving a comment below, thanks.

PS. If you are interested in what GOTO conference sessions a tester finds interesting, you can see my schedule here.




Experiences of Test Automation

August 9, 2012 21:56 by author rasmus

I finally finished reading the book Experiences of Test Automation by Dorothy Graham and others. The book is a kind of “essay collection” of case studies, with people describing their approaches and experiences in implementing test automation for various kinds of software systems.

Being composed of different stories, some will of course be more relevant than others. I found the following chapters to be the most interesting ones:

  • Chapter 9 – Model-Based Test-Case Generation in ESA Projects
    It was actually not the model-based content of this chapter I found most interesting, but rather its focus on calculating ROI for test automation projects, and on when to expect break-even. It is always a good idea to include ROI when doing test automation, in order to set stakeholder expectations. E.g. if a test takes 1 hour to execute manually and 4 hours to automate, the automation breaks even after roughly 4 runs (ignoring maintenance costs).

    One of the charts, showing break-even for an automation project after 4 test repetitions/cycles, looks like this:

    [Chart: cumulative cost of manual vs. automated testing, with break-even after 4 repetitions]

  • Chapter 10 – Ten years and still going
    This was also a case study in their previous book, Software Test Automation, from 1999. The chapter includes the original case study and ends with a short conclusion on the project status now. I must say I’m impressed to hear that they now run 5 million automated tests each month.

  • Chapter 15 – Test Automation of SAP Business Process and 16 – Test Automation of a SAP Implementation
    These two chapters give a good insight into how SAP has automation as a feature of their product (eCATT), i.e. allowing partners to use the built-in automation features to test customizations. I found this especially relevant, as I’m working on making our test automation, which runs on the standard version of our product, usable for our consultants on their customization projects. I certainly see automation as a feature of a product, not just something executed by the testers.

    And I strongly agree with the first statement of the chapter on page 278: “Test automation is nothing but software development”, followed by “(…) any test automation project that did not follow software development processes and standards failed or ended up with huge maintenance efforts.”

  • Chapter 21 – Automation through the Back Door (by Supporting Manual testing)
    With so many different sources for the book, not all chapters have the same level of technical depth. This chapter, though, contains the following diagram of their keyword-driven framework implementation. I've not specifically worked with keyword-driven testing; nevertheless I think the diagram gives a good overview of the different components of an automated test case, i.e. test logic/code, input for the test case, and a test execution engine.

    [Diagram: components of the keyword-driven framework]
    And then I like the concept of focusing on increasing the efficiency of manual testing through automation. Scripted test cases tend to find relatively few bugs, so freeing manual testers to do more exploratory testing, by automating any scripted manual test case like BVT, regression tests etc., is almost certain to increase the effectiveness of your manual testing.

  • Chapter 27 – BHAGs, Change, and Test Transformation
    What I find interesting in this chapter is the following table at the bottom of page 503, showing which kinds of test cases found bugs on their project:

    Automation Defects:        9.3%
    Manual Scripted Defects:   24.0%
    Exploratory Test Defects:  58.2%
    Fix Verification Defects:  8.4%

    This also matches my experience well: you shouldn’t expect to find many bugs with automated test cases, but they can free resources to find more bugs using other approaches. The automated tests still bring a lot of value to a project this way, even if they don't find many bugs themselves.

  • Chapter 28 – Exploratory Test Automation: An Example Ahead of Its Time
    The final chapter is by Harry Robinson, who has pioneered model-based testing. Besides being an introduction to model-based testing, the chapter also describes how they managed to improve the quality of the product by working closely with developers and adjusting the models based on changes to requirements. The system ran for 8 months after release before the first (low-severity) bug was found. Very impressive indeed, and I found this chapter really interesting to read.

In general, it’s an OK book. As it presents a wide range of different approaches to test automation projects, and generally lacks technical details of the implementations, I wouldn’t recommend it to someone new to test automation from a technical point of view. But from a more organizational/theoretical point of view, the book is relevant, with its general conclusions:

  • Test automation is software development, and to increase the probability of success you need to follow the same rules as for any other software development project
  • You need management support to be successful with test automation
  • Test automation is a long-term investment. You should estimate ROI before starting, in order to set stakeholder expectations
  • Define an architecture for your test automation code.



About the author

Team lead at Unity Technologies. Focus on automating any task possible. Author of e.g. http://uimaptoolbox.codeplex.com

Twitter: @RasmusSelsmark
