Rasmus Møller Selsmark

On software test and test automation

TechEd Europe 2012 – Thursday (part one)

June 28, 2012 22:01 by rasmus

First session today was DEV345 - The Accidental Team Foundation Server Admin, which didn’t quite stick to the subject: it was more about the differences between TFS 2010 and TFS 2012 and how to upgrade, rather than general information for people who have taken over responsibility for a TFS server from someone else.

One of the most important notes from this session was to run the Best Practices Analyzer Tool for Team Foundation Server, not just once or twice but on a monthly basis. Some other interesting slides covered what is automatically enabled when upgrading from TFS 2010 to TFS 2012 (left) and what has to be enabled manually (right):

IMG_1058 IMG_1060 

 

And some slides on permissions in TFS, which are actually split across TFS, SharePoint and SQL Server Reporting Services (left slide). The middle slide (apologies for the bad quality) shows the different permission names in the different parts, and finally there is a “pattern” for configuring one AD group per role per team project:

IMG_1064 IMG_1065 IMG_1066

This separation of permissions has not changed in TFS 2012, so it’s still required to set permissions in TFS, SharePoint and SQL Server Reporting Services separately. http://tfsadmin.codeplex.com/ was presented as a tool that might help with setting permissions.

 

A new feature of TFS 2012 is the ability to create “teams” within a project, each of which then gets its own product backlog, burndown charts etc. I hope this feature can reduce the number of necessary projects in TFS, as projects introduce a lot of limitations, e.g. on branching and lab environments (which cannot be deployed across TFS projects).

IMG_1070

 

Next I went to DEV340 - Taking Your Application Lifecycle Management to the Cloud With the Team Foundation Service, which showed:

  • Setting up a hosted TFS on http://tfspreview.com
  • Creating a new project
  • Creating a simple MVC 4 solution, checking in the code and configuring build definition(s) for compiling the code on the TFS server
  • Creating a website on Azure
  • Configuring a deployment build definition to deploy the website to Azure

All this was done within approx. 30 minutes, which is basically all it takes for a company to set up the infrastructure for source control, builds, work item tracking and deployment. If I were to emphasize one thing from this TechEd conference, this is probably it.

I also learned from this session that the TFS team works in three-week sprints, releasing to tfspreview.com after every sprint, which has given them a lot of really valuable feedback from customers on features to improve. It sounds like a really interesting success story about releasing often to production. For Microsoft it’s also a big change compared to earlier versions of TFS, where they could only ship new features with service packs. Microsoft has just 4 people operating the tfspreview.com site, running thousands of accounts. Of course there is still work to do for the “on-premise” TFS admin, but it certainly sounds like there is a potential cost optimization in running TFS this way.

From what I have seen of TFS 2012 here at TechEd, I must say that the web UI is really impressive, and in general it seems Microsoft has improved TFS in a lot of areas. I really hope to be able to upgrade as soon as possible once TFS 2012 is released.

Here are slides on the terms for using TFS Preview and the currently supported features:

IMG_1084 IMG_1085

 

The worst session I have attended so far was OSP432 - Application Lifecycle Management: Automated Builds and Testing for SharePoint projects (if the speaker reads this, I’m sure he agrees :)). The first slides started out by showing which problems to solve. So far, so good, but note the right slide here mentioning the use of PowerShell remoting for deploying the SharePoint solution to a test environment.

IMG_1087 IMG_1088 IMG_1089

This is built into TFS as the build-deploy-test feature, but nevertheless he showed how to customize a build process template in order to execute a PowerShell script.

IMG_1095

This is surely not the way to do it; if you’re a SharePoint developer reading this, make sure to use the standard deployment build process template shipped with both TFS 2010 and TFS 2012. He also showed how to create a CodedUI test using the standard recorder tool shipped with Visual Studio. Nothing new for me there, except the fact that CodedUI tests can be used for automating SharePoint UI tests. And then he made the mistake of editing the UIMap Designer.cs file. The Designer.cs file is autogenerated by Visual Studio (as it says at the top of the file) and should of course never be modified manually; custom code belongs in the matching partial class file instead, as sketched below the image. See the file name in the upper left corner of the image:

IMG_1093
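
To illustrate where such customizations belong (a minimal sketch; the class and method names below are my own, not from the session): the recorder regenerates UIMap.Designer.cs on every re-recording, while the matching UIMap.cs file contains an empty partial class that is reserved for hand-written code and is never touched by the tool.

// UIMap.cs - the hand-editable half of the partial UIMap class.
// UIMap.Designer.cs is regenerated by the CodedUI recorder, so any
// manual edits there are lost; custom code goes in this file instead.
using Microsoft.VisualStudio.TestTools.UITesting;

namespace SharePointUITests
{
    public partial class UIMap
    {
        // Hypothetical helper wrapping a recorded action (defined in
        // UIMap.Designer.cs) with an extra wait, kept here so that
        // re-recording the test cannot overwrite it.
        public void ClickRibbonButtonAndWait()
        {
            this.ClickRibbonButton();  // recorded action, lives in Designer.cs
            Playback.Wait(2000);       // crude fixed wait, for illustration only
        }
    }
}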

 

I’m breaking today’s post into two parts, as it has already become quite long. Go to the second part




DSTB Conference 2012 - “From theory to practice”

May 9, 2012 21:59 by rasmus

Along with other testers from ScanJour, I participated in today’s Danish test conference held by the Danish Software Testing Board. My “30 seconds commercial” from the conference would contain the following:

The first presentation was on the subject “What defines an excellent test manager”, presenting a survey made among European and US companies. Not much really to say about that presentation…

From John Fodeh’s presentation/workshop on metrics:

  • Seen as a production process, the testing activity doesn’t add any value/features to the software product itself, since we’re not as such modifying the software. John used a figure similar to the following to describe this situation:
     image
  • So what value does the testing activity bring into the process? Luckily there was an answer to this, and it’s “Information”:
    image
  • Back to metrics (which the workshop was about): metrics can help us understand, assess, predict, improve and finally communicate the progress of our software development project.
  • The ISTQB syllabus says “In order to control testing, it should be monitored throughout the project”, meaning that monitoring metrics should be an ongoing activity, and metrics must be SMART (Specific, Measurable, Acceptable, Relevant and Timely).
  • In order to identify metrics, we were shown a technique called GQM (Goal-Question-Metric), where you start by defining your goals, then the questions you would like answered, and finally identify your metrics (see the sketch in code after this list). This is shown in the following figure:

    image

    An example could be the goal “Get release overview” with the question “Are we ready to release?”, and then identifying the metrics needed to answer that question.

    I’m looking forward to using this technique, which I actually think I can find use for in many situations, not only in my professional work life.
  • Then we had a discussion about “what is software quality”. Of the presented statements I like “Conformance to requirements” [Crosby] and “Fitness for use” [Juran]. I find it important to be able to measure quality somehow (although it’s not always easy).
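
To make the GQM structure concrete, here is a minimal sketch in code of the release-overview example above (my own illustration; all names are invented, and the metric values are hardcoded placeholders):

using System;
using System.Collections.Generic;

// Minimal GQM model: a goal is refined into questions,
// and each question is answered by one or more metrics.
class Metric
{
    public string Name;
    public Func<double> Measure;  // how the metric value is collected
}

class Question
{
    public string Text;
    public List<Metric> Metrics = new List<Metric>();
}

class Goal
{
    public string Description;
    public List<Question> Questions = new List<Question>();
}

class GqmExample
{
    static void Main()
    {
        var goal = new Goal { Description = "Get release overview" };
        var question = new Question { Text = "Are we ready to release?" };
        question.Metrics.Add(new Metric { Name = "Open blocking bugs", Measure = () => 3 });
        question.Metrics.Add(new Metric { Name = "Passed test cases (%)", Measure = () => 92.5 });
        goal.Questions.Add(question);

        foreach (var q in goal.Questions)
            foreach (var m in q.Metrics)
                Console.WriteLine("{0} / {1}: {2} = {3}",
                    goal.Description, q.Text, m.Name, m.Measure());
    }
}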

In the remaining part of the workshop, John presented a “Dashboard” from a real project. It was interesting to see, and all in all a good presentation on the use of metrics.

Saxo Bank had a presentation about “Agile release test in practice”, where they presented how they have one team that performs integration tests full-time, enabling them to release every second week. I like this concept of having one place where an integration test can be applied to all your products, and it is certainly an idea that I will bring home. This presentation also mentioned that “There are no testing best practices – only good practices that can be used in a given context” (see http://context-driven-testing.com/)

Nykredit talked about their road to TMMi level 4. It was interesting to hear about TMMi and their central test excellence center, but they live in a world of long waterfall-ish projects, so it’s not easy to apply their experiences to the agile projects we’re running.

Then finally Dorothy Graham gave a presentation on her latest book, Experiences of Test Automation (which I’m currently reading – I will write a blog post when I’m done with it). I had the pleasure of attending a workshop by Dorothy Graham a couple of years ago at EuroSTAR in Copenhagen; a session I still remember as being really useful. Her book contains 28 case studies of test automation projects from around the world. Some of her main points (at least as I see them) are:

  • Test automation is development. You have to treat a test automation project like any other software development project, i.e. have clear requirements, an architecture, sufficient resources etc.
  • Define an architecture for your test automation; otherwise you end up with automated tests that cannot be maintained and will probably be dropped sooner or later.
  • Build “keywords” (or actions) that can be reused across different test cases – again, simply a good development practice (see the sketch after this list).
  • Of course pay attention to your failed tests, but remember to review your green tests regularly to ensure they are still correct (also see my previous post).
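
As a minimal sketch of the “keywords” point (my own illustration; the keyword names are invented), each keyword is a small reusable action, and a test case is then just a sequence of keywords:

using System;
using System.Collections.Generic;

// Keyword-driven sketch: keywords are named, reusable actions,
// and a test case is composed by sequencing them.
class KeywordLibrary
{
    readonly Dictionary<string, Action<string>> keywords =
        new Dictionary<string, Action<string>>();

    public KeywordLibrary()
    {
        // Hypothetical keywords; real ones would drive the UI or an API.
        keywords["Login"]      = arg => Console.WriteLine("Logging in as " + arg);
        keywords["CreateCase"] = arg => Console.WriteLine("Creating case " + arg);
        keywords["Verify"]     = arg => Console.WriteLine("Verifying " + arg);
    }

    public void Run(string keyword, string argument)
    {
        keywords[keyword](argument);
    }
}

class KeywordDemo
{
    static void Main()
    {
        var lib = new KeywordLibrary();
        // The same keywords can be reused across many test cases:
        lib.Run("Login", "testuser");
        lib.Run("CreateCase", "Case-001");
        lib.Run("Verify", "case exists");
    }
}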

And from her summary slide:

  • Test automation will become more critical for everyone
  • Management support and understanding is essential for long-term success
  • The right testware architecture is essential for minimizing maintenance and maximizing use of automation
  • Learn from others’ mistakes – life’s too short to make them all yourself!

It’s a little surprising at first that Dorothy Graham is able to talk about test automation without showing any tools, but she certainly does a good job of viewing it from a theoretical level, and my impression is that my domain tester colleagues also got a good idea of the possibilities and challenges of test automation from this presentation.

It was actually from Dorothy Graham that I first learned the concept of preparing a “30 seconds commercial”, so that you are prepared when people back at the office ask you how the conference went. Although I’m probably not able to tell all this in 30 seconds…




Pattern for closing ApplicationUnderTest instance in CodedUI tests

April 21, 2012 21:30 by rasmus

In his talk Building Robust, Maintainable Coded UI Tests with Visual Studio 2010 at Belgium TechDays 2011, Brian Keller shows the concept of “using using” (slide 11 in his slide deck) in order to dispose of the application under test after the test has finished.

Of course it’s useful to clean up after your tests, but as I learned the hard way this week (although I’m certainly not the first one to blog about it…), remember to end your UI-based tests by verifying that your application can actually close without crashing. In our case, a JavaScript window was blocking execution, which my automated test simply ignored – and passed…

My pattern for solving this is the following code, which besides using an ApplicationUnderTest as suggested by Brian Keller, also tries to close the window (a browser instance in this case) before ending the test case.

/// <summary>
/// Creates and updates a case in Captia UI
/// </summary>
[TestMethod]
[TestCategory("BVT")]
public void TC17015_CreatingAndEditingACase()
{
    // Step: Log on to Captia and click the button “Create case”.
    ApplicationUnderTest browserWindow = CaptiaMainUIActions.OpenCaptia(TestConfig.Instance.CaptiaBaseUrl);

    try
    {
        // Execute test steps/actions...

        CloseCaptiaAndVerify(browserWindow);
    }
    finally
    {
        DisposeBrowserWindowIfNotClosed(browserWindow);
    }
}

private static void CloseCaptiaAndVerify(ApplicationUnderTest browserWindow)
{
    CaptiaMainUIActions.CloseCaptia();
    bool captiaClosed = WaitForm.Wait(
        () => ((browserWindow.Process == null) || (browserWindow.Process.HasExited)),
        TimeSpan.FromSeconds(10), "Waiting for Captia window to close.", 2);

    Assert.IsTrue(captiaClosed, "Expected Captia window to be closed.");
}

private static void DisposeBrowserWindowIfNotClosed(ApplicationUnderTest browserWindow)
{
    if ((browserWindow != null) && (browserWindow.Process != null) && (!browserWindow.Process.HasExited))
        browserWindow.Dispose();
}

The CloseCaptiaAndVerify() method uses my CodedUI UIMap to click the upper-right cross to close the window, and a custom-made WaitForm (sketched below) to wait up to ten seconds (polling every 2 seconds) to verify that the window actually closed; otherwise the test fails.
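
Since WaitForm is custom-made, here is a minimal sketch of what its Wait() method could look like (my reconstruction for illustration; the real implementation also shows a small wait dialog with the given message):

using System;
using System.Threading;

// Polling wait helper, similar in spirit to the WaitForm.Wait call above:
// evaluates the condition repeatedly until it is true or the timeout expires.
static class WaitHelper
{
    public static bool Wait(Func<bool> condition, TimeSpan timeout,
        string message, int pollIntervalSeconds)
    {
        Console.WriteLine(message);  // the real WaitForm shows this in a dialog

        DateTime deadline = DateTime.UtcNow + timeout;
        while (DateTime.UtcNow < deadline)
        {
            if (condition())
                return true;
            Thread.Sleep(TimeSpan.FromSeconds(pollIntervalSeconds));
        }
        return condition();  // one final check after the timeout
    }
}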




The killer string :)

April 9, 2012 09:22 by rasmus

I use this string from time to time when testing applications. It verifies/tests the following areas:

  • Unicode (the "лежт" part)
  • SQL injection (the single and double-quotes)
  • HTML formatting (the "<" sign etc.)

The string is: <a '% &x " лежт F8E5>

Embarrassingly this was the result after testing it on one of my personal websites :)
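
As a hedged illustration of the HTML-formatting part (a minimal example of my own, not from the original post): any page that renders user input should encode this string, and a simple round trip catches the most obvious mangling:

using System;
using System.Net;

class KillerStringDemo
{
    // The killer string from above, escaped for C#.
    const string KillerString = "<a '% &x \" лежт F8E5>";

    static void Main()
    {
        // HTML: the string must be encoded before being rendered in a page.
        string encoded = WebUtility.HtmlEncode(KillerString);
        Console.WriteLine(encoded);  // &lt;a &#39;% &amp;x &quot; лежт F8E5&gt;

        // Round trip: decoding must give back exactly the original string.
        if (WebUtility.HtmlDecode(encoded) != KillerString)
            throw new Exception("Killer string was mangled in the encoding round trip.");
    }
}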




About the author

Team lead at Unity Technologies. Focus on automating any task possible. Author of e.g. http://uimaptoolbox.codeplex.com

Twitter: @RasmusSelsmark
