Rasmus Møller Selsmark

On software test and test automation

Highlights from UCAAT conference on model based testing

clock October 31, 2013 23:35 by author rasmus

I attended the User Conference on Advanced Automated Testing (UCAAT) 2013 in Paris last week. The conference primarily covered model-based testing, which was a good opportunity for me to get an overview of how model-based testing is used in different areas and industries, as well as which direction it is moving in. This post covers what I found to be the highlights of the conference, namely:

  • How Spotify uses model-based testing to bridge the gap between testers, developers and domain experts
  • Microsoft metrics showing that model-based tests can be used to find bugs earlier in the development cycle
  • Model-based testing is widely used in the automotive and telecommunications industries
  • Increasing use of the TTCN-3 language for writing automated test cases

As can be seen from the conference program, the conference consisted of a relatively high number of short 20-minute sessions. This meant that many different areas of model-based testing were presented, from Spotify to nuclear and flight safety systems, giving lots of input and inspiration, but also a good social atmosphere where you could easily find a speaker and ask about their area. The schedule was probably a little too tight (30 minutes per session would have been better), but my impression is that organizing the sessions this way resulted in a very dynamic conference with lots of opportunities for networking.

Improving collaboration using Model Based Testing at Spotify

Kristian and Pang from Spotify gave a 90-minute tutorial on their use of model-based testing at Spotify using the open source tool GraphWalker, of which Kristian is one of the authors. In contrast to Spec Explorer, where models are described in C#, GraphWalker uses a visual designer for modeling, and from their presentation it seems that the model visualizations produced this way are a lot easier to understand for people not used to modeling.

IMG_0729

And this was also one of the major points of the presentation, i.e. that these models can be used to improve collaboration and communication within the Scrum teams, as the models can be understood easily by all Scrum team members, Product Owner and other stakeholders.

IMG_0715 IMG_0716

Another very interesting feature of GraphWalker is the ability to combine models, which Spotify uses to let individual teams build models that can then be reused by other teams. One example they mentioned was a model for login, which the "Login team" could use for extensive testing, whereas other teams depending on this subsystem would just use it for testing the most common paths through the login process before entering their own part of the system.
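The idea of walking a model to generate test sequences can be sketched in a few lines of Python. The login states and transitions below are invented for illustration and are not GraphWalker's actual model format or API:

```python
import random

# Hypothetical login model: state -> list of (action, next_state) edges
MODEL = {
    "LoggedOut": [("enter_credentials", "CredentialsEntered")],
    "CredentialsEntered": [("submit_valid", "LoggedIn"),
                           ("submit_invalid", "LoggedOut")],
    "LoggedIn": [("logout", "LoggedOut")],
}

def random_walk(model, start, steps, seed=None):
    """Generate a test sequence by randomly walking the model graph."""
    rng = random.Random(seed)
    state, path = start, []
    for _ in range(steps):
        action, state = rng.choice(model[state])  # pick an outgoing edge
        path.append(action)
    return path

path = random_walk(MODEL, "LoggedOut", 10, seed=1)
print(path)
```

In a real tool each action name would be bound to automation code driving the system under test; reusing another team's model then amounts to walking their graph with your own stop condition.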

The rise of TTCN-3?

Perhaps because I'm not working full-time as a tester these days, I didn't know about the TTCN-3 language before attending the conference. In short, TTCN-3 is a language for writing automated test cases, including what seems to be a significant library of helper methods, which was used to demonstrate how to mock a DNS server for testing a DNS query. The following two slides show the context of testing with TTCN-3. What I basically got from the presentation, and my following talk with the presenter, is that TTCN-3 is "just another" programming language, although designed with testers in mind, i.e. the libraries and structures target the flow of test cases rather than production code.

IMG_0697 IMG_0701
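As a rough analogue of the DNS demo, here is the same test flow sketched in Python rather than TTCN-3; all class and record names are invented for illustration (TTCN-3 itself expresses this with templates, ports and timers):

```python
# A mocked DNS resolver and a test that queries it, mirroring the demo's
# send-query / match-reply flow. Names here are invented, not a real DNS API.
class MockDnsServer:
    def __init__(self, records):
        self.records = records  # e.g. {"example.com": "93.184.216.34"}

    def query(self, name):
        answer = self.records.get(name)
        return {"name": name,
                "rcode": "NOERROR" if answer else "NXDOMAIN",
                "address": answer}

def test_dns_query():
    server = MockDnsServer({"example.com": "93.184.216.34"})
    reply = server.query("example.com")
    assert reply["rcode"] == "NOERROR"
    assert reply["address"] == "93.184.216.34"
    # Unknown names should yield NXDOMAIN rather than an address
    assert server.query("nope.invalid")["rcode"] == "NXDOMAIN"

test_dns_query()
print("ok")
```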

I met with the presenter of this tutorial, and even though the usage of TTCN-3 seems to be increasing, especially within large companies like Ericsson (who seemed very satisfied with it), I find it hard to understand why we need a new language specifically for testing, especially considering that TTCN-3 isn't particularly human-readable, i.e. accessible to non-technical people. I would prefer to make the libraries behind TTCN-3 available for use in other programming languages, but there don't seem to be any plans for doing so.

It might make sense to use it in a large organization where you have a critical mass of people writing test scripts, but I personally believe that test automation should be written in the same language as the production code, and TTCN-3 hasn't changed my mind about this. Using the same tools and languages for both production and test code on a project makes it a lot easier for people working in different areas to communicate and collaborate.

Finding bugs early using model based testing at Microsoft

Microsoft gave two very good presentations on their use of model-based testing for testing Windows Phone, showing some interesting numbers on the types of bugs found, as well as when in the development cycle they were found.

MBT_BugTypes MBT_BugPhases

Although I haven't collected similar statistics from my own usage of model-based testing, my experience is also that modeling the system helps you find structural/logical bugs during the early development phases.

Model based testing at Unity

I also had the pleasure of giving a presentation at the conference, on our early findings from using MBT for testing Version Control integration and simple in-game physics in the Unity 3D game engine. The presentation is available here: UCAAT-ModelBasedTesting3DGameEngine-RasmusSelsmark.pdf and was (of course) done as a simple game inside the Unity editor. At least I got several positive comments on the presentation format :)

Overall it was a good conference. There were no superstars (although the guys from Spotify showed some very interesting usages of models for testing), but again many presentations and lots of opportunities to meet and talk with people about how they use model-based testing in their day-to-day work.




Warm Crocodile Conference - Wednesday

clock January 17, 2013 00:50 by author rasmus

I have the pleasure of participating in the Warm Crocodile Conference, held by Microsoft in Copenhagen these days, along with a couple of colleagues from ScanJour A/S. The first thing to mention is that the venue is rather unusual, namely an "apartment hotel", which means the sessions take place in big apartments; in one of the sessions I attended today, approximately half of the audience had to stand. I might be getting old, but I certainly prefer a good old-fashioned auditorium with nice soft seats Smile

IMG_0135

Enough of that; let's focus on the presentations, which so far have been good. The subject of the keynote by Roy Osherove this morning was the "ravine", i.e. the progress of learning, as illustrated by the following slide.

 

Keynote: Ravines of learning

The talk had both a professional and a personal angle, i.e. how we develop our skills professionally, as well as how having a child also creates a need for acquiring new skills on a personal level.

IMG_0079

The slide above shows the situation where we experience a decrease in productivity, e.g. when having to learn new skills or technologies, but after a while the net effect should be an increase in productivity. An important point here is that you need to get out of your "comfort zone" now and then in order to increase your personal knowledge and productivity. If you keep working on your regular tasks, you will not improve.

Then he talked about "survival mode" (or "fire-fighting", as I usually call this state), where you or the team are constantly behind on tasks and often or always have to cut corners, typically on quality, in order to meet deadlines. I apologize for the bad quality of the left picture, but I was standing at the back of the room (again, conditions at the conference were surprisingly primitive, but the content of the talk was good).

image image

You need to be aware when you are in "survival mode" and try to get out of it. Especially if you are the lead of a team, you must get the team out of survival mode as fast as possible once you identify that it has happened. The right slide says that there is always a reason/root cause for a behavior, i.e. nothing simply happens by accident when you investigate the issue further.

The last part of the talk was about changing behavior, which consists of the following three aspects:

  • Personal
  • Social
  • Environmental

Changing behavior on a personal level might be done through regular 1-1 talks, reviews, follow-ups etc. with the person. On the social level it can be gathering a group of people to watch videos or otherwise introducing new ideas into the group, and finally the environmental aspect can be e.g. changing bonuses/rewards that don't encourage the desired behavior.

 

A different view on layered architecture by Ayende Rahien

Next I went to the talk by Ayende, author of/involved in e.g. NHibernate and RavenDB, on layered architecture and especially on reducing the number of layers when possible. The purpose of introducing layers in a software system is normally to abstract the lower parts of the system, thereby easing the use of those levels and generally making the overall system more maintainable. I've seen (and unfortunately also written myself) layers that just introduce additional overhead/deadweight to the system, e.g. around simple database calls.

The talk started out with a historical tour of software development from the end of the 1970s, when computers were mainly standalone systems with limited need for any layers. But as e.g. networks prevailed, a need arose for abstracting e.g. data access and business logic into layers. The point of the talk was that we should not blindly implement layers and abstractions just because we can, but ensure that any layer actually provides a sound abstraction and overall value to the system.

The left slide below shows actual code from Ayende's blog (based on RavenDB), where instead of having a dedicated repository for getting the blog posts, he simply queries them using LINQ, since there is almost no logic involved except the call to the WhereIsPublicPost() extension method. The right slide gives examples of abstractions, with a note to limit their number.

IMG_0097 IMG_0099

Then the talk went on to unit testing vs. integration/functional testing, where Ayende advocated focusing on testing the full system, instead of focusing on unit tests which might just "test the abstraction". E.g. by using an in-memory database, you can still run your integration tests fast and isolated. As I primarily work with test, I certainly see the point of focusing on testing the full system; after all, our goal is a working system, not just passing unit tests. Nevertheless, unit tests do "raise the bar" for what needs to be covered by either integration tests or even manual testing, but we should of course make sure that the unit tests actually provide value.

IMG_0103 IMG_0102
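The in-memory database point can be sketched in Python, with sqlite3's `:memory:` mode standing in for whatever database the system actually uses (the table and function names are invented for illustration):

```python
import sqlite3

# Integration-style test against an in-memory database: fast, isolated,
# and exercising real SQL instead of a faked repository abstraction.
def get_public_posts(conn):
    return [row[0] for row in conn.execute(
        "SELECT title FROM posts WHERE is_public = 1 ORDER BY title")]

conn = sqlite3.connect(":memory:")  # fresh database per test run
conn.execute("CREATE TABLE posts (title TEXT, is_public INTEGER)")
conn.executemany("INSERT INTO posts VALUES (?, ?)",
                 [("Hello", 1), ("Draft", 0), ("World", 1)])

assert get_public_posts(conn) == ["Hello", "World"]
print("ok")
```

Because the database lives in memory and is created per test run, such tests stay close to unit-test speed while still covering the full query path.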

 

Introduction to Twitter, Facebook and LinkedIn APIs using OAuth

Anne-Sofie Nielsen gave a "Tour de API" on how to access Twitter, Facebook and LinkedIn using OAuth. Unfortunately the light in that room wasn't good for taking pictures, but her presentation can be found at http://prezi.com/fcyng34ox56t/tour-de-api/. It was an interesting introduction to integrating with social networks, and it's not unlikely that our case and document management system can use this in some situations.

 

NuGet

After lunch, I went to the first of two talks on NuGet today, both held by the same speaker. As I see it, we actually started with the "advanced" session and closed the day off with the more introductory talk. Nevertheless, the two talks in combination have given me valuable input on how to start using NuGet to help manage dependencies between our internal automation frameworks. In his first talk he gave a number of pieces of advice, of which I found the following most relevant, especially the slide on version numbering schemes, which has given me some challenges, as the binaries built by our Team Foundation Server use the four-part (three-dot) format.

IMG_0109 IMG_0110 IMG_0114
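One way to handle that mismatch is to derive a three-part, SemVer-style package version from TFS's four-part Major.Minor.Build.Revision number, e.g. folding the revision into a pre-release label for CI builds. The mapping below is my own sketch, not an official NuGet or TFS rule:

```python
# Derive a NuGet-friendly version from a four-part TFS build number.
# The "-ciNNNN" pre-release convention is an invented example.
def to_nuget_version(tfs_version, prerelease=False):
    major, minor, build, revision = tfs_version.split(".")
    version = f"{major}.{minor}.{build}"
    return f"{version}-ci{revision.zfill(4)}" if prerelease else version

print(to_nuget_version("1.2.3.4"))                   # stable package version
print(to_nuget_version("1.2.3.4", prerelease=True))  # CI build version
```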

Breaking the chronological order here, but as we're on the subject, I'll move right on to the introductory session on NuGet. From this I found the following three slides informative, describing symptoms of problems in your overall build infrastructure and the term "dependency hell", which is by no means new to me.

IMG_0164 IMG_0165 IMG_0166

I even found this MSDN article from 2001 (!) describing the “DLL Hell” as we used to call it back then in the COM days. .NET has reduced much of the pain related to this, but there is certainly room for a tool like NuGet to help manage these dependencies.

 

Unit Testing Good Practices & Horrible Mistakes

Roy, who gave the keynote this morning, also had two sessions on unit testing, of which I attended the last one, on best practices (apparently along with half of the conference participants, as most of us had to stand up; this explains the bad image quality of some of the pictures, as your arms get tired after standing for a long time Smile)

Good unit tests (and automated tests in general, I would say) should be trustworthy, i.e. not fail intermittently, and be maintainable and readable. As the right-most slide shows, these properties are linked, in the sense that tests that aren't easily readable are harder to maintain and therefore also likely not to be trusted in the longer run.

IMG_0123 IMG_0126 IMG_0127

Characteristics of unit tests are that they

  • Have fast execution times
  • Execute in-memory
  • Have full control over the "test context", e.g. input values
  • Fake their dependencies

If a test does not comply with this, it's an integration test (and he mentioned several times that unit and integration tests should be placed in separate projects, again to help maintainability).

Some other very valuable points from his talk were:

  • Have a naming convention for your tests. His suggestion is [UnitOfWork_StateUnderTest_ExpectedBehavior], which he has also described at http://osherove.com/blog/2005/4/3/naming-standards-for-unit-tests.html
  • Avoid multiple asserts against different objects in a test. The problem is that if the first assert fails, you will never know whether the other object(s) failed or passed. Instead, this should be separated into different tests
  • Don't fake database tests; perform these as integration tests running against the database instead
  • Use at most one fake/mock object per test, and preferably avoid fakes altogether, as they create internal dependencies on your actual implementation, which might reduce the maintainability of both test and production code
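A couple of these points can be illustrated in Python; the discount function is invented for the example, and the test names follow Roy's UnitOfWork_StateUnderTest_ExpectedBehavior convention adapted to Python's snake_case:

```python
import unittest

# Toy unit under test, invented for the example
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (100 - percent) / 100

class ApplyDiscountTests(unittest.TestCase):
    # Each test asserts on a single outcome, so a failure points
    # directly at the behavior that broke.
    def test_apply_discount_ten_percent_reduces_price(self):
        self.assertEqual(apply_discount(200, 10), 180)

    def test_apply_discount_negative_percent_raises(self):
        with self.assertRaises(ValueError):
            apply_discount(200, -5)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ApplyDiscountTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
print("ok")
```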

Roy has written the book The Art of Unit Testing, which most likely contains all the above advice plus a lot more.

 

Implementing loosely coupled software systems using Rebus

This talk was given by the author of Rebus, who gave a live demonstration of implementing a messaging-based system on top of this free open-source message queue/bus, which handles exchanging messages between different components in a software system.

IMG_0136 IMG_0138 IMG_0142

With help from some pre-prepared code and code snippets, we almost ended up with an implementation of the following system within the one-hour presentation, including the mechanics for handling messages that time out.

IMG_0158

Message queuing is certainly not a new technology, but at first sight Rebus seems to be a very useful implementation that can help componentize/modularize a system (rather than developing a big monolithic system).
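The core send/handle pattern a bus like Rebus provides can be sketched in-process in Python (Rebus itself is .NET and adds durable queues, retries and the timeout handling mentioned above; all names below are invented):

```python
import queue

# Minimal in-process message bus: handlers subscribe by message type,
# send() enqueues, run() drains the queue and dispatches.
class SimpleBus:
    def __init__(self):
        self.handlers = {}       # message type -> list of handler functions
        self.q = queue.Queue()

    def subscribe(self, msg_type, handler):
        self.handlers.setdefault(msg_type, []).append(handler)

    def send(self, message):
        self.q.put(message)

    def run(self):
        while not self.q.empty():
            msg = self.q.get()
            for handler in self.handlers.get(type(msg), []):
                handler(msg)

class OrderPlaced:
    def __init__(self, order_id):
        self.order_id = order_id

received = []
bus = SimpleBus()
bus.subscribe(OrderPlaced, lambda m: received.append(m.order_id))
bus.send(OrderPlaced(42))
bus.run()
print(received)  # prints [42]
```

The decoupling comes from components only sharing message types, never direct references to each other, which is what makes it easier to split a monolith into independently deployable parts.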

 

Despite the rather unusual venue, it has been a good first conference day with nothing but interesting and motivating speakers and topics.

And not least, I won a free copy of the "Pro NuGet" book written by the speaker of the two NuGet sessions. I'm very much looking forward to reading it Smile




GOTO Aarhus 2012 at a glance

clock October 4, 2012 23:32 by author rasmus

The GOTO Aarhus 2012 conference is over, and I can look back on some really good and inspiring days, where I had the pleasure of hearing speakers like Martin Fowler talk about NoSQL databases.

IMG_1565

As the title says, it was an introduction to NoSQL databases, but with references back to relational databases and even network databases, the predecessor to relational databases. I'm old enough to have been taught about network databases at university, but it was nevertheless really interesting to see how a computer science field like databases is evolving. As with many of the other speakers at the conference, Martin Fowler did a very good job of keeping the right balance between theory and practice.

Scott Hanselman talked about how the browser and the Web have evolved, so it's now possible to run even a Commodore 64 or Linux emulator in JavaScript. As he said, "JavaScript is the assembly language of the Web":

IMG_1395

This was followed up by Anders Hejlsberg and some of his team members from Microsoft, who presented TypeScript, a new type-safe language with classes that compiles into JavaScript. Of course the TypeScript compiler is written in JavaScript Smile (which is quite useful, as it can then easily run on different operating systems)

IMG_1557

 

Michael T. Nygard and several others had some really good sessions on Continuous Deployment. Michael talked about the concept of a "deployment army"; I think we all know of situations where a deployment has required special skills.

IMG_1464

One of the purposes of Continuous Deployment is to eliminate the need for big releases by building up confidence around the software product, so it really is always in a releasable state. Then you can decide whether or not to release it, depending on your type of software and customers. And these are only a few of the speakers I've heard over the last three days (actually, the ones where I had some decent pictures...)

 

As said, it has been some fantastic days at the GOTO conference. The conference managed to find just about the right balance between theory and practice in the sessions; only very few sessions were either too theoretically or too practically oriented. Also, when comparing this conference to Microsoft TechEd, which I attended earlier this year, I find the GOTO conference to have a much higher level, both when it comes to speakers and topics.

So if you are considering only one conference for next year (and GOTO is able to keep up the high professional level of speakers and topics), I will definitely recommend that you consider attending the GOTO conference. At least I certainly hope that I can be part of it again next year.




GOTO Aarhus 2012 - Day 2: What, no testers?!

clock October 2, 2012 22:35 by author rasmus

The good thing for me as a tester about attending a developer conference like GOTO is that I get a different perspective on my profession, which I probably wouldn't have gotten by attending a usual test conference. Today I saw a couple of sessions on Continuous Deployment, most remarkably a presentation by Etsy (http://www.etsy.com/) showing how they do 30 deployments to production daily.

A common anti-pattern in relation to deployment is the concept of a "deployment army", as presented by Michael T. Nygard in his talk: as a result of having few deployments, each release is so large that you need an "army" of people to deploy it. This makes deployment expensive, so you cannot afford to increase the number of deployments, and the result is a kind of deadlocked situation that you need to get out of. Continuous Deployment may solve this, as soon as you have accepted that you have a problem.

The benefit of doing Continuous Deployment is that the software development team gets almost immediate feedback on deployed changes, without having to wait for some time before releasing into production. Below is an example from Etsy, where after a release they noticed an increase in the number of warnings (the vertical red line at approx. 16:05). Within 10-15 minutes they reacted to the situation and were able to release a new version which fixed most of it, and 30 minutes after the incident the issue was solved completely. Of course we want to avoid such issues, but when they happen, it's important that the organization can react fast. So the benefit is basically really fast and relevant feedback on code changes. I'll get back to the implications this has for an organization.

As stated by several speakers today, this of course requires the architecture to be prepared for this way of deployment, especially not having one big monolithic system, but being able to deploy individual components of the system. For those skeptical about whether Continuous Deployment can actually work, Jez Humble mentioned in his presentation that when Flickr was acquired by Yahoo in 2005, Yahoo compared Flickr (which was deploying continuously to production) against all their other services, and it turned out that Flickr had the highest uptime. So Continuous Deployment certainly can work for high-uptime sites, and after hearing the presentations today, it certainly seems to be the case.

But let's take a look at a deployment process in general:


(NFR Test: Non-functional requirements testing)

This looks very similar to the deployment cycle I'm used to, although in my situation some of the steps are manual or partly manual. In Continuous Deployment this must be performed several times per day, which leaves very limited time (actually none) for any manual tasks. So moving towards Continuous Deployment means that all of the processes, including test, must be automated. As it's the developers who monitor the production site (performance, errors etc.), there are not even dedicated system operators involved; the DevOps (developer/operations) role is part of the individual development team.

To follow up on the title of this blog post, I got a couple of minutes to talk with Michael T. Nygard (independent consultant) and Mike Brittain (Etsy) about the role of test automation. The very interesting answer from them was that they don't have dedicated test automation engineers at Etsy, and Michael said that he doesn't see this role in new organizations, but rather occasionally sees a role of SRE (software reliability engineer). Amazon is also present at the conference with a booth, which basically seems to be a recruitment campaign aimed at Danish developers. I took a look at their job postings, and it turns out that they have no test automation engineer jobs either. I heard from others that Netflix (a TV/video streaming site) also mentioned in their presentation that they have a very limited number of testers.

Conclusion on state of test automation in the industry

I started out my series of GOTO blog posts at http://rasmus.selsmark.dk/post/2012/08/27/GOTO-Aarhus-2012-Is-it-time-for-developers-to-move-beyond-unit-tests.aspx by asking the question

“Is it common to have automation testers in the industry (like my current job) or is automation a part of usual software development activities?”

After today, I must say the conclusion is that it's not common to have dedicated testers (at least not in newer companies), and in order to do Continuous Deployment fully, dedicated testers become a bottleneck, so this role does not exist in such an organization. In the case of Etsy, they even have very few (<5) people doing manual exploratory testing out of a total of 350 employees. So the game is certainly changing for testers if you move towards more frequent releases. I don't feel concerned as such, as I'm sure any good tester has valuable domain knowledge which can be used elsewhere in the organization, but we should be aware that the world is changing for us.




GOTO Aarhus 2012 - Day 1: New infrastructure creates new types of companies

clock October 1, 2012 23:12 by author rasmus

Monday's keynote at the GOTO Aarhus conference this year was held by Rick Falkvinge, leader of the Swedish Pirate Party, a political party working for free information and a free Internet. The title of the keynote was "Red flags on the internet", which refers to the "Red Flag Act" of 1865, a law introduced in the United Kingdom requiring a person to walk in front of every car waving a red flag, to warn pedestrians.

image

It turned out that this law was lobbied for by the British railways in order to secure their interests. But the result was that Germany gained a 20-year advantage for its automobile industry, so the law ended up hurting Britain more than it helped. Rick went on to other examples from history, ending with Kodak, who actually invented the digital camera back in 1975, but because their income depended heavily on their analog film products, they didn't develop this new digital technology further and eventually went bankrupt in January 2012. So the point here is that it doesn't help to protect or hide information and inventions, as they will emerge at some point anyway.

Even though the Swedish Pirate Party is currently mostly a protest party, he drew lines back to green politics 40 years ago, which is now a common part of the official program of all political parties. He didn't give a good answer to how we can still get paid if all information is freely available, but I nevertheless find it interesting to see how history has shown that information and new technologies cannot be hidden away.

New infrastructure introduces new types of companies

With the shift from e.g. water mills to electricity, it was no longer necessary to build factories close to the energy source; they could instead be built where the actual need was. Unfortunately I didn't get a picture of the actual slide, but I find it thought-provoking how similarly we are still constrained today, e.g. in where companies can be built, so I think we need a picture of a good old watermill-powered factory from Wikipedia. I'll get back to this shortly.

Braine-le-Château_JPG02

RubyMotion - a company of the Internet age

This brings me to Laurent Sansonetti, founder of RubyMotion, a platform for developing Ruby-based applications for e.g. the iPhone. I was fortunate to have a talk with Laurent about his company.

IMG_1367

Laurent has worked as a developer for several large companies, including 7 years at Apple, before deciding to work on his own project, RubyMotion. Moving from a secure job at Apple to starting his own company resulted in a couple of very uncertain months with various challenges, and at times fear that the product would never make it. But it went well, and they now have several large customers and are making money. Congratulations; it's good to see that there is always room for new innovative products.

The structure of the company is what Laurent calls a "distributed startup", meaning that the current three employees are spread across the world (Belgium, the United States and Japan) and don't have an office yet. They work from wherever they decide, more or less at the times they decide, and communicate using Campfire, a group-based chat system where they can have ongoing discussions and see what has been talked about while they were away. Being spread across the world this way means that the company is more or less "open" at all times. They then meet physically for a week every third month, in order to actually meet and have face-to-face discussions.

Another advantage of being distributed this way is that you get easier access to talent, as people don't need to relocate in order to work for you, and perhaps otherwise wouldn't be able to take the job. As Laurent said, it's hard to find good programmers, and even harder to have them relocate.

I think this is a good example of a company that leverages the possibilities of the internet age to build a startup that would have been difficult to imagine a few years ago. It's especially worth noting that with only 3 people they are able to keep the company running most of the day. Like back when factories had to be built close to an energy source, I still think that (IT) companies of today are constrained to some extent by access to talent or other kinds of infrastructure. Although I actually like going to the office (working zone) every day, I think it's worth paying attention to the possibilities that are available for starting up a new company with very little "deadweight" like physical buildings, network, servers etc., when all of this can be distributed/hosted on the net.




GOTO Aarhus 2012 - Developers *are* writing functional tests :)

clock September 9, 2012 22:33 by author rasmus

Even though this is part of my "warm-up" blog posts for the GOTO Aarhus 2012 Conference, I'll start out by referring to a related event this week. At our latest meeting in the Danish TFS User Group, held at the Microsoft office in Hellerup, Rune Abrahamsson from BRFkredit gave a presentation on how they are using http://cuite.codeplex.com/ to develop functional tests for one of their systems. If you are interested, Mads (our agile coach at ScanJour) has also blogged from the user group here.

Although the people attending the TFS User Group meetings are mostly developers, we often have at least one testing-related topic on the agenda, and this time it was clear that many of the developers present do have experience with developing functional tests, often using a UI automation framework like Microsoft Coded UI. So even before attending the GOTO Aarhus 2012 Conference, it seems I can conclude that developers are writing functional tests, which pleases me, as I think the people writing the software are also the best at writing automation for it. The testers can then do actual quality assurance, by ensuring that the customer requirements have been automated, as well as performing exploratory testing to exercise the software in new ways.

IMAG0007

Sorry for the bad picture quality; the slide shows at the bottom which areas (including functional UI tests) are covered by developer tests, whereas e.g. exploratory tests are performed by their manual testers (who are actually domain experts, not full-time testers). I find it very positive to see a development team take quality seriously by including test automation when they are in a situation where they lack testers. I guess this is not an uncommon scenario in the industry: that you have to get test assistance from other teams/departments.

And it makes me wonder if manual testers should be forced not to do manual scripted testing at all, but only do exploratory testing and quality assurance, since the developers will fill the gap themselves by automating the scripted test cases? Smile

 

To put this in the context of this year's GOTO conference, I have been looking at the biographies of some of the speakers, and found a video by Steve Freeman on Sustainable Test-Driven Development, where he touches on topics like:

  • Too tightly coupled production and test code, which makes refactoring difficult (beginning of video)
  • Test code structure (similar to the Given/When/Then pattern of e.g. SpecFlow)
  • "You come back to the code after 6 months, and forgot why you did this" (approx. 13:00)
  • Patterns for writing test code, as simple as good variable naming, or using DSL syntax for more readable test code
  • Prepare for your tests to fail at some point, simply by having clear error messages when it happens ("Explain yourself", "Describe yourself" and "Tracer Objects", around 23 minutes into the video)
  • Make your tests robust, e.g. "Only Enforce Order When It Matters" (around 40:00)
  • "Tests Are Code Too" (43:35), which is the last slide and also seems to be the headline of the presentation
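The Given/When/Then structure and the "explain yourself" advice can be sketched in Python (the shopping-cart class and test are invented for illustration):

```python
# Toy class under test
class Cart:
    def __init__(self):
        self.items = []

    def add(self, item, qty=1):
        self.items.extend([item] * qty)

def test_adding_two_items_gives_count_of_two():
    # Given an empty cart
    cart = Cart()
    # When two items are added
    cart.add("book", qty=2)
    # Then the cart contains two items; the assertion message explains
    # the failure instead of just reporting "assert failed"
    assert len(cart.items) == 2, (
        f"expected 2 items after adding qty=2, found {len(cart.items)}: "
        f"{cart.items}")

test_adding_two_items_gives_count_of_two()
print("ok")
```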

The last slide is shown here:

image

Although all four bullets here are important, I have myself faced the “If it’s hard to test, that’s a clue” problem when developing unit tests, both for production code and for test automation framework code. As an automation tester, I also regularly run into it when writing automated functional tests against production code: missing IDs on UI controls, no clear interface for testing, etc. When the development team writes automated functional tests as part of the definition of done, it should hopefully result in code better suited for automation. And then there is the general conclusion: test automation should be treated like any other coding activity.

This year Steve Freeman has a session on Raspberry Pi with extra toppings Monday at 13:20, which unfortunately is a timeslot where I also have some other sessions I would like to see (What is value and Mythbusting Remote Procedure Calls), but I might have to change my mind, as it could also be fun to get an introduction to the Raspberry Pi.

After watching this video, I feel confident that I will meet developers at the GOTO Aarhus 2012 conference with a quality/testing mindset, and I’m looking forward to talking with you about your view on test automation, and how I as an automation tester can bring value and increase the quality of our software products, even if it's no longer a dedicated test automation developer writing the functional tests.

Feel free to comment on this post below.




GOTO Aarhus 2012 – Is it time for developers to move beyond unit tests?

clock August 27, 2012 12:05 by author rasmus

I’ve been so fortunate to have been invited as official blogger at the GOTO Aarhus 2012 conference this year, which most likely means a number of new readers will be reading this blog post, so I’ll start by giving a short introduction to myself.

10 years ago I shifted from “regular” software development to software testing with a job as an SDET (Software Development Engineer in Test) at Microsoft Vedbæk (Denmark), after which followed a couple of years as a software developer (but still with focus on test and quality assurance), until I started in my current job as technical lead for test automation at ScanJour two years ago. We run Scrum in teams of 5-8 persons, usually with two domain (manual) testers and one automation tester in each team. The role of the automation tester ranges from automating already defined manual functional test cases to performing load and performance tests.

Now I’ve got the chance to participate in a GOTO conference, which seems to be a true developer conference, by not having “Tester” as an option in the Role field when you register – I registered myself as “Other” Smile

clip_image001

 

So what’s the state of test automation among developers?

During the last 10-12 years in software development, I’ve seen how unit tests have helped to increase the quality of the products we ship, often by ensuring that the most obvious bugs are caught early, but not least by ensuring that the individual classes/units are actually testable (otherwise it wouldn’t be possible to write unit tests against them).

But I still face problems when trying to write automated tests, due to lack of “testability” of the products in the following areas:

  1. No silent installation/uninstallation, or silent mode not officially supported (so if you report bugs, they are closed “by design”)
  2. Missing unique IDs on controls in web pages when doing UI tests, which makes it harder to write test automation and makes test cases less stable
  3. No clear testable layers in the application, which often means you have to resort to doing automation on the UI layer
  4. Test is not done in parallel with development, but sometimes after the developer has started working on a new task, or even in a later sprint (unfortunately)

#4 is not only a developer issue, but a problem for the whole team, because it means that you risk finishing or even shipping code which either contains bugs, or which you find out is not testable.
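Issue #2 above is easy to demonstrate. The sketch below (plain Python with hypothetical page data, not any particular UI framework) shows why a locator based on a unique ID survives layout changes, while a positional locator silently starts hitting the wrong control:

```python
# A toy model of the buttons on a page, in render order.
page_v1 = [{"id": "btnSearch", "text": "Search"},
           {"id": "btnSave",   "text": "Save"}]

# In the next release a new button is inserted at the top.
page_v2 = [{"id": "btnHelp",   "text": "Help"},
           {"id": "btnSearch", "text": "Search"},
           {"id": "btnSave",   "text": "Save"}]

def find_by_position(page, index):
    """Brittle locator: depends on where the control happens to be."""
    return page[index]

def find_by_id(page, control_id):
    """Stable locator: depends only on the control's unique ID."""
    return next(c for c in page if c["id"] == control_id)

# The positional locator found "Search" in v1, but after the
# layout change it silently returns "Help" instead.
assert find_by_position(page_v1, 0)["text"] == "Search"
assert find_by_position(page_v2, 0)["text"] == "Help"

# The ID-based locator is stable across both versions.
assert find_by_id(page_v1, "btnSearch")["text"] == "Search"
assert find_by_id(page_v2, "btnSearch")["text"] == "Search"
```

This is exactly why missing IDs make test cases less stable: the automation is forced into the positional style.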

Inspired by ISTQB (a certification for testers), I see the following levels of software tests (simplified; load tests etc. also exist), with unit tests being closest to the code and of course automated, and release acceptance testing often (but not necessarily) being a manual testing activity.

clip_image002

I highly value that testing is independent from development (even if part of the same Scrum team), simply to get another pair of eyes to look at the system; I’ve seen many “idea bugs” (not understanding customer needs correctly) caught early by domain testers, which could otherwise easily have slipped all the way to the customer (where the cost of fixing them is higher).

Regarding the issues I raise above concerning lack of testability, I think many of them could be solved by having the software developers write more automated functional tests, in the same way that unit tests ensure some degree of testability of units/classes. After all, it’s in the interest of the whole team to deliver working software to the customers as effectively as possible.

So the questions I’ll try to get answers to while being at GOTO Aarhus 2012 conference are:

  • What’s the general state of automated functional testing?
  • Are software developers already automating non-unit tests, e.g. functional UI tests?
  • Is it common to have automation testers in the industry (like my current job), is automation a part of the usual software development activities, or have you actually been able to successfully do capture/replay test automation maintained by domain/manual testers? (I would be surprised if the latter is the case)

You are of course already welcome to give me input on these topics by leaving a comment below, thanks.

PS. If you are interested in what GOTO conference sessions a tester finds interesting, you can see my schedule here.




TechEd Europe 2012 – Friday

clock July 1, 2012 09:42 by author rasmus

Last day of TechEd, and we started by attending WSV414 - Advanced Automation Using Windows PowerShell 3.0, showing new features in PowerShell 3.0 and in general how Microsoft has improved support for automating administrative tasks in Windows Server 2012 and all the new Server products as well. As also mentioned in the first keynote, cloud computing is an important area for Microsoft, and a part of this transition for Windows to become a “cloud-enabled OS” (left slide) is to allow remote administration of Windows machines through PowerShell cmdlets. The right slide shows some of the new features in PowerShell targeting new users (like me…), of which I especially look forward to having IntelliSense support in the PowerShell ISE.

IMG_1128 IMG_1132 

The “Common Engineering Criteria” is the name of Microsoft's effort to “PowerShell-enable” all administrative tasks in Windows Server and the Server products in general, and to make it available on several “end-points” like remoting and Web Access, which enables you to simply open a URL on the server and execute PowerShell commands in a “command window”-like interface:

IMG_1137 IMG_1141

Some of the improvements to PowerShell ISE (including IntelliSense) and how to call and display data from a REST-based webservice:

IMG_1147 IMG_1151

We have started developing PowerShell scripts at ScanJour for deploying our test environments. With the new features in PowerShell 3.0, it looks to be a lot easier to get started authoring PowerShell scripts. The session didn’t mention anything about testing PowerShell scripts, but I think there will soon be a need to write unit tests for PowerShell scripts as well, as they easily contain just as much logic as a small C# program. Maybe Fakes (which has been mentioned several times during the conference) can help with mocking external dependencies for PowerShell scripts too?

 

DEV324 - A Modern Architecture Review: Using the New Code Review Tools was held by a speaker from Australia, who started out talking about his home country (and even showed a commercial for an Australian beer), but the session got up to speed and did contain some interesting topics about code review. Not so much about the new “request code review” feature in Visual Studio (he actually only showed slides of this), but more in general about how to use tools like the Layer Diagram (middle slide - requires Visual Studio Ultimate) or NDepend (right slide - a commercial tool) to get an overview of a codebase and find “weak spots” in the code.

IMG_1154 IMG_1157 IMG_1160

And of course using Code Analysis in Visual Studio for finding potential issues in the code. Here the speaker suggested creating a “Product Backlog Item” for a code warning such as a potential SQL injection. Personally I think it depends on the situation. The optimal solution would be to talk to the developer and have it fixed right away, but if that’s not possible for whatever reason, I think a potential SQL injection is serious enough to get a bug report (my concern is simply that it won’t get fixed otherwise…)

IMG_1162
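Whether an injection warning becomes a backlog item or a bug, the fix itself is usually small: use parameterized queries instead of string concatenation. A minimal sketch with Python's built-in sqlite3 module (illustrative only, not the .NET code from the session):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

user_input = "alice' OR '1'='1"  # a classic injection attempt

# Vulnerable: the input is concatenated into the SQL text, so the
# OR clause becomes part of the query and matches every row.
vulnerable = conn.execute(
    "SELECT count(*) FROM users WHERE name = '" + user_input + "'"
).fetchone()[0]

# Safe: the driver passes the value as data, never as SQL, so the
# query looks for a user literally named "alice' OR '1'='1".
safe = conn.execute(
    "SELECT count(*) FROM users WHERE name = ?", (user_input,)
).fetchone()[0]

print(vulnerable, safe)  # prints: 2 0
```

The same pattern applies with ADO.NET's SqlParameter; the point is that the query text and the user data travel separately.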

An interesting “definition of a bug” from the speaker was that every bug should have an IntelliTrace file attached. I like the idea, and we get closer with the ability to collect IntelliTrace files on production servers with Visual Studio 2012, but there is probably still some time before we get to this point.

 

In DEV412 - Identify & Fix Performance Problems with Visual Studio 2012 Ultimate the speaker started by talking about the importance of starting load testing from day one of a project, in order to identify performance issues as soon as possible (left slide). The middle slide gave an overview of the tool support in the different flavors of Visual Studio, and on the right is an overview of the steps to identify a performance issue in a web application (where he had injected various sleep statements and for-loops).

IMG_1168 IMG_1169 IMG_1171

A pattern for load tests is to have each load test exercise the functionality of only one user story, again in order to ease identification/isolation of performance bottlenecks if they should occur; it also serves as a starting point for the implementation of load tests. The right slide gives an overview of the setup of test controller and agents for running distributed load tests. At this point there was a question from the audience about using CodedUI tests for load tests, to which the speaker made it clear that even though it might be possible to use CodedUI tests for load testing scenarios, it’s certainly not their purpose, and it’s much simpler to achieve the same result using a dedicated load/stress testing framework. It seems this speaker knows his testing tools well.

IMG_1172 IMG_1176 IMG_1178
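The "one user story per load test" pattern keeps each test small enough that a regression points directly at one scenario. Outside Visual Studio's load test tooling, the same idea can be sketched in a few lines of Python (purely illustrative; the scenario function and numbers are made up):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def search_user_story():
    """Stand-in for one user story, e.g. 'search for a document'."""
    time.sleep(0.01)  # simulate the request round-trip
    return True

def run_load_test(scenario, users=20, iterations=5):
    """Run one scenario with N virtual users; return average latency."""
    latencies = []
    def one_user():
        for _ in range(iterations):
            start = time.perf_counter()
            assert scenario()
            latencies.append(time.perf_counter() - start)
    with ThreadPoolExecutor(max_workers=users) as pool:
        futures = [pool.submit(one_user) for _ in range(users)]
    for f in futures:
        f.result()  # surface any failures from the worker threads
    return sum(latencies) / len(latencies)

avg = run_load_test(search_user_story)
print(f"avg latency over one user story: {avg:.3f}s")
```

Because the harness runs exactly one scenario, a jump in the reported latency can only come from that user story, which is the whole point of the pattern.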

The last part of the presentation was on the profiling tools in Visual Studio 2012: first an overview of the different types of profiling (of which he used “Sampling” for his presentation), and then an example of comparing profiling results before and after some bottlenecks in the code were identified and fixed.

IMG_1181 IMG_1184

From my experience profiling is a very effective way to see if code behaves as expected.

 

The last session for me at this conference was DEV301 - ASP.NET Roadmap: One ASP.NET - Web Forms, MVC, Web API and more, which covered new features in ASP.NET 4.5 (left) with focus on the new “One ASP.NET” feature. As you can see on the right slide, the feature is not finished yet, but it should be included in the RTM of Visual Studio 2012.

IMG_1186 IMG_1187 

As I understood this feature, it allows you to create multiple web frontends for the same application, which is shown below, where we have an MVC app (left) displaying the same information as the Web Pages app on the right. This might be useful when having a website with both a rich browser-targeted frontend and a simple mobile frontend.

IMG_1189 IMG_1190

And there will be support for OAuth, of which he showed how to implement login using a Google account.

IMG_1191 IMG_1192

Unfortunately my camera ran out of battery at this point, so no more pictures, but I think I got the most interesting parts of the conference covered. I hope you have enjoyed reading these posts from the conference. At least it has been very valuable for me to go through my notes and pictures every evening, since all the impressions you get at such a conference easily get mixed together.




TechEd Europe 2012 – Thursday (part two)

clock June 28, 2012 22:40 by author rasmus

Today I went to one of the lunch sessions. A slightly strange name, as you cannot bring your lunch with you to these sessions, since they are held in the usual rooms containing only chairs. They are just placed in the lunch break timeslot, and seem to be an opportunity for the exhibitors to indirectly demo their products.

The topic of this session was “A Live Demo Introduction to the Kinect API”. I had hoped that the session would show ways to actually use a Kinect, but it was more a presentation of a layer on top of the Kinect API that the speaker had developed. I guess there must be possible usage scenarios for Kinect-enabled software (also in business applications), but this session didn’t show any.

IMG_1100

 

Couldn’t find any testing-related sessions in the next timeslot, so I went to DEV303 - ASP.NET Loves HTML5 on support for HTML5 in Visual Studio 2012. I also attended the workshop yesterday where we developed a Metro style application for Windows 8 using JavaScript, and surely the use of HTML5/JavaScript seems to be increasing. One example showed how using HTML5 for rounded corners reduced 37 lines of CSS and HTML to 5 lines (left). The right slide shows a new feature in Visual Studio for starting up a different browser when debugging the web application.

IMG_1104 IMG_1105

Then the session showed improvements to the ASP.NET server controls, e.g. FileUpload, which has been extended with support for multiple files (since this is possible in HTML5), as well as improvements to the CSS editor and IntelliSense/compile-time checking of JavaScript code. Certainly useful for JavaScript developers, and my impression is that JavaScript will become increasingly popular.

 

The last session of the day was DEV312 - Creating Robust, Maintainable Coded UI Tests with Visual Studio 2012, presented by Brian Keller, which I of course had to see Smile

IMG_1114 IMG_1115

As you can see from the “Objectives” slide above, this is basically the same talk he gave at TechEd last year, now just “upgraded” to TFS 2012. According to Brian Keller, the equivalent session at TechEd North America 2012 contained some more in-depth features presented by a member of the dev team for the CodedUI framework, but apparently this meant that some of the audience was “lost”. My feedback to Brian will be that he should probably split this session into two: one introduction (as this session more or less was) and then an advanced session for people who have worked with CodedUI tests.

Some of the new features of CodedUI in Visual Studio 2012/TFS 2012 are different types of testing projects (left), a new field for an error message in case an assertion fails (middle) and, most interestingly, screenshots taken for each test step (right). I think this last feature is actually more valuable than a video recording, as you get a picture per step, instead of having to find where in a video recording a specific step was performed:

IMG_1116 IMG_1117 IMG_1122

Of course he showed the “action recording to CodedUI test” feature, where you transform a recorded manual action recording from Test Manager into C# code. I don’t think this will ever work well, as it generates multiple UIMaps, so I don’t think it can be categorized as “maintainable”…

 

The official part of TechEd today ended with an “Ask the Experts” session, where we had a chance to talk with Microsoft employees about various issues and questions regarding our current TFS 2010 setup. Really valuable, and I really appreciate this kind of opportunity to speak directly with the people actually developing the products. Yet another fine day at TechEd.

After this the EURO 2012 semi-final match between Germany and Italy was shown in the "Theater" room, with burgers and beers for dinner, even though this wasn't planned originally. Again I'm impressed by the hospitality of Microsoft and the TechEd organizers.




TechEd Europe 2012 – Thursday (part one)

clock June 28, 2012 22:01 by author rasmus

The first session today was DEV345 - The Accidental Team Foundation Server Admin, which didn’t quite stick to the subject; it was more about the differences between TFS 2010 and TFS 2012 and how to upgrade, and not general information for people who have taken over responsibility for a TFS server from someone else.

One of the most important notes from this session was to run the Best Practices Analyzer Tool for Team Foundation Server, not just once or twice but on a monthly basis. Some other interesting slides on what is automatically enabled when upgrading from TFS 2010 to TFS 2012 (left) and what has to be manually enabled (right):

IMG_1058 IMG_1060 

 

And some slides on permissions in TFS, which are actually split between TFS, SharePoint and SQL Server Reporting Services (left). The middle slide (apologies for the bad quality) shows the different permission names in the different parts, and finally a “pattern” for configuring one AD group per role per team project:

IMG_1064 IMG_1065 IMG_1066

This separation of permissions has not been changed in TFS 2012, so it’s still required to set permissions in TFS, SharePoint and SQL Server Reporting Services. http://tfsadmin.codeplex.com/ was presented, which might help with setting permissions.

 

A new feature of TFS 2012 is the ability to create “teams” within a project, which then get their own product backlog, burndown charts etc. I hope this feature can reduce the number of necessary projects in TFS, as projects introduce a lot of limitations on branching and on lab environments (which cannot be deployed across TFS projects).

IMG_1070

 

Next I went to DEV340 - Taking Your Application Lifecycle Management to the Cloud With the Team Foundation Service, which showed:

  • Setting up a hosted TFS on http://tfspreview.com
  • Creating a new project
  • Creating a simple MVC 4 solution, checking in code and configuring build definition(s) for compiling the code on the TFS server
  • Creating a website on Azure
  • Configuring a deployment build definition to deploy the website to Azure

All this was done within approx. 30 minutes, and is basically all it takes for a company to set up the infrastructure for source control, builds, work item tracking and deployment. If I should emphasize one thing from this TechEd conference, this is probably it.

I also learned from this session that the TFS team is working in 3-week sprints, releasing to tfspreview.com after every sprint, which has given them a lot of really valuable feedback from customers on features to improve. It sounds like a really interesting success story on releasing often to production. For Microsoft it’s also a big change compared to earlier versions of TFS, where they could only ship new features with service packs. And Microsoft has 4 people operating the tfspreview.com site, running thousands of accounts. Of course there is still work to do for the "on-premise" TFS admin, but it certainly sounds like there is a potential cost optimization in running TFS this way.

From what I have seen of TFS 2012 here at TechEd, I must say that the web UI is really impressive, and in general it seems Microsoft has improved TFS in a lot of areas; I really hope to be able to upgrade as soon as possible after TFS 2012 has been released.

Here are slides on terms for using TFS Preview and currently supported features:

IMG_1084 IMG_1085

 

The worst session I have attended so far was OSP432 - Application Lifecycle Management: Automated Builds and Testing for SharePoint projects (if the speaker reads this, I’m sure he agrees Smile). The first slides started out by showing which problems to solve. So far, so good, but note the right slide here mentioning the use of PowerShell remoting for deploying the SharePoint solution to a test environment.

IMG_1087 IMG_1088 IMG_1089

This is already built into TFS as the build-deploy-test feature, but he nevertheless showed how to customize a build process template in order to execute a PowerShell script.

IMG_1095

This is surely not the way to do it, and if you’re a SharePoint developer reading this, make sure to use the standard deployment build process template shipped with both TFS 2010 and TFS 2012. He also showed how to create a CodedUI test using the standard recorder tool shipped with Visual Studio. Nothing new for me there, except the fact that CodedUI tests can be used for automating SharePoint UI tests. And then he made the mistake of editing the UIMap designer.cs file. The designer.cs file is autogenerated by Visual Studio (as it says at the top of the file), and should of course never be modified. See the file name in the upper left corner of the image:

IMG_1093

 

Breaking the post today into two parts, as it has become quite long already. Go to second part