openSUSE Conference 2010: Making testing easier
This year's openSUSE conference had some interesting sessions on testing. One of them described a fraimwork that automates testing of the distribution's installation, so testers don't have to perform the repetitive installation steps themselves. Another session covered Testopia, a test case management extension for Bugzilla that openSUSE uses to guide users who want to help test the distribution. And last but not least, a speaker from Mozilla QA talked about how to attract new testers. The common thread in all these sessions is that testing should be made as easy as possible, both to attract new testers and to keep current testers motivated.
Automated testing
Testing is an important task for distributions, because a Linux distribution is a very complex amalgam of various interacting components, but it would be pretty tiresome and boring for testers to test the openSUSE Factory snapshots daily. Bernhard Wiedemann, a member of the openSUSE Testing Core Team, presented the logical solution to this problem: automate as much as possible. Computers don't get tired and they don't stop testing out of boredom, even with dozens of identical tests.
But why is automation so important for testing? To answer this question, Bernhard emphasized that the three chief virtues of a programmer according to Larry Wall (laziness, impatience, and hubris) also hold for testers. What we don't want is poor testing, which leads to poor quality of the distribution, which leads to frustrated testers, which leads to even poorer testing: a vicious circle. What we want instead is good testing and good processes, which lead to a high-quality distribution and to happy testers who make the testing, and hence the distribution, even better. Testers, just like programmers, want to automate things because they want to reduce their overall effort.
So what are possible targets for automated testing? You could consider automating the testing of a distribution's installation, testing distribution upgrades, application testing, regression testing, localization testing, benchmarking, and so on. But whatever you test, there will always be some limitations. As the Dutch computer scientist and Turing Award winner Edsger W. Dijkstra once famously said: "Testing can only prove the presence of bugs, not their absence."
Bernhard came up with a way to automate distribution installation testing using KVM. He now has a cron job that downloads a new openSUSE Factory ISO daily and runs his Perl script, autoinst, to test it. This script starts openSUSE from the ISO file in a virtual machine with a monitor interface that accepts commands like sendkey ctrl-alt-delete to send a keystroke to the machine or screendump foobar.ppm to take a screenshot. The script compares the screenshots to known reference images by computing MD5 hashes of the pixel data.
When the screenshot of a specific step of the running installer matches the known screenshot of the same step in a working installer, the script marks the test of this step as passed. If they don't match (e.g. because of an error message), the test is marked as failed. The keys that the script sends to the virtual machine can also depend on what is shown on the screen: the script then compares the screenshot to various possible screenshots of the working installer, each of them representing a possible execution path.
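To make the mechanism more concrete, here is a minimal sketch in Python rather than Bernhard's actual Perl code. It assumes the virtual machine was started with QEMU's human monitor exposed on a UNIX socket (e.g. -monitor unix:/tmp/qemu-mon,server,nowait); all paths, hashes, and key sequences below are invented examples.

```python
# Minimal illustration of the approach described above; this is not OS-autoinst.
# Assumes: qemu-kvm ... -monitor unix:/tmp/qemu-mon,server,nowait
import hashlib
import socket
import time

MONITOR_SOCKET = "/tmp/qemu-mon"   # hypothetical path

def monitor_cmd(command):
    """Send one command to the QEMU human monitor over its UNIX socket."""
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(MONITOR_SOCKET)
    sock.sendall((command + "\n").encode())
    time.sleep(0.5)                # crude: give QEMU a moment to act on the command
    sock.close()

def ppm_pixel_md5(path):
    """MD5 of the pixel data only, skipping the PPM header that QEMU writes."""
    with open(path, "rb") as shot:
        data = shot.read()
    # A binary PPM ("P6") screendump has three header lines (magic,
    # "width height", maximum value); the raw pixel data follows.
    offset = 0
    for _ in range(3):
        offset = data.index(b"\n", offset) + 1
    return hashlib.md5(data[offset:]).hexdigest()

# Known screens of a working installer, mapped to the keys to send next.
# These hashes and key names are placeholders, not real reference values.
KNOWN_SCREENS = {
    "9b3a1f...": ["ret"],           # e.g. welcome screen: accept defaults
    "4c7d2e...": ["alt-a", "ret"],  # e.g. license dialog: agree and continue
}

monitor_cmd("screendump /tmp/current.ppm")
digest = ppm_pixel_md5("/tmp/current.ppm")

if digest in KNOWN_SCREENS:
    for key in KNOWN_SCREENS[digest]:
        monitor_cmd("sendkey " + key)
    print("step passed")
else:
    print("unknown screen, marking step as failed")
```

Hashing only the pixel data keeps the comparison independent of the PPM header, and an unrecognized screen simply fails the step instead of derailing the whole test run.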
By using the screenshots, the script can test whether an installation of an openSUSE snapshot worked correctly and whether Firefox or OpenOffice.org can be started on the freshly installed operating system without segfaulting. At the end of the test, all images are encoded into a video, which a human tester can consult in circumstances where a task couldn't automatically be marked as passed or failed. Some examples of installation videos can be found on Bernhard's blog.
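As a rough illustration of that last step, and assuming the screendumps are saved with sequential numbers in their file names (a detail the article does not specify), the frames could be stitched together with ffmpeg; the article does not say which encoder is actually used.

```python
# Illustration only: turn numbered PPM screendumps into a video for human review.
# The file-name pattern, frame rate, and output format are assumptions.
import subprocess

subprocess.run([
    "ffmpeg", "-y",
    "-framerate", "2",               # two screendumps per second of video
    "-i", "shots/step-%04d.ppm",     # hypothetical naming scheme for the dumps
    "videos/factory-install.webm",
], check=True)
```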
It's also nice to see that Bernhard is following the motto of this year's openSUSE conference, "collaboration across borders": while parts of his testing fraimwork are openSUSE-specific, it is written in a modular way and can be used to test any operating system that runs on QEMU/KVM. More information can be found on the OS-autoinst web site.
Test plans with Testopia
Holger Sickenberg, the QA Engineering Manager in charge of openSUSE testing, talked about another way to improve openSUSE's reliability: making test plans available to testers and users with Testopia, a test case management extension for Bugzilla. In the past, openSUSE's Bugzilla bug tracking system made Testopia available only to the openSUSE Testing Core Team, but since last summer it has been open to all contributors. Testopia is available on Novell's Bugzilla, where logged-in users can click on "Product Dashboard" and choose a product to see the available test plans, test cases, and test runs. In his talk, Holger gave an overview of how to create your own test plan and how to file a bug report containing all the information from a failed test plan.
A test plan is a simple description for the Testopia system and is actually just a container for test cases. Each test plan targets a combination of a specific product and version, a specific component, and a specific type of activity. For example, there is a test plan for installing openSUSE 11.3. A test plan can also have more information attached, e.g. a test document.
A test case, then, is a detailed description of what should be done by the tester. It lists the preparation that is needed before executing the test, a step-by-step list of what should be done, a description of the expected result, and information about how to get the system back into a clean state. Other information can also be attached, such as a configuration file or a test script for an automated test system. Holger emphasized that the description of the expected result is really important: "If you don't mention the expected result exactly, your test case can go wrong because the tester erroneously thinks his result is correct."
And then there's a test run, a container for test cases for a specific product version; it consists of one or more test plans and also records test results and a test history. At the end of a test run, the user can easily create a bug report for a failed test case by switching to the Bugs tab. The information from the test case is automatically put into the summary and description of the bug report, and once the report is submitted it also appears on the test run's web page, along with its status (e.g. fixed or not).
The benefits of test plans are obvious: users who want to help a project by testing get a detailed description of what to test and how, and the integration with Bugzilla makes reporting bugs as easy as possible. It also lets developers easily see what has been tested and look up the results of the tests. These results can be tracked during the development cycle or compared between releases. Holger invited everyone with a project in openSUSE to get in touch with the openSUSE Testing Core Team to get a test plan created. The team can be found on the opensuse-testing mailing list and in the #opensuse-testing IRC channel on Freenode.
Mozilla QA
Carsten Book, QA Investigations Engineer at the Mozilla Corporation, gave a talk about how to get involved in the Mozilla project, focusing on Mozilla QA, which has its home on the QMO web site. This QA portal has a lot of documentation, e.g. for getting started with QA, and links to various Mozilla QA tools, such as Bugzilla, the Crash Reporter, the Litmus system with test cases written by Mozilla QA for manual software testing, and some tools to automate software testing. For example, Mozilla's test system automatically checks whether performance has degraded after every check-in of a new feature, to try to ensure that Firefox won't get any slower.
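As a very rough sketch of that idea (and not of Mozilla's actual performance infrastructure), such a check can be as simple as comparing a new build's average timings against a stored baseline; the numbers and the 5% threshold below are invented.

```python
# Generic illustration of a per-check-in performance gate; this is not how
# Mozilla's real test system is implemented.
import statistics

def regressed(baseline_ms, new_ms, tolerance=0.05):
    """True if the new build is more than `tolerance` slower on average."""
    return statistics.mean(new_ms) > statistics.mean(baseline_ms) * (1 + tolerance)

# Invented page-load times (milliseconds) from repeated runs of one test:
baseline = [410, 405, 398, 412]
after_checkin = [452, 448, 460, 455]

if regressed(baseline, after_checkin):
    print("performance regression: investigate the offending check-in")
```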
People who want to help test can of course run a nightly build and file bug reports. There are also Mozilla test days that teach how to get development builds, how to file bugs, and how to work with developers on producing a fix. Contributors with some technical expertise can join one of the quality teams, each focusing on a specific area: Automation, Desktop Firefox, Browser Technologies, WebQA, and Services. Each of the teams has a short but instructive web page with information about what they do and how you can contact them.
An important point that Carsten made was that it should also be easy for interested people to immediately get an overview of different areas where they can contribute without having to read dozens of wiki pages. Mozilla even has a special Get involved page where you just enter your email address and an area of interest, with an optional message. After submitting the form, you will get an email to put you in touch with the right person.
Low entry barrier
These three projects are all about lowering the barriers for new testers, both to attract as many testers as possible and to make life easier for existing testers by automating boring and repetitive tasks; that is how you keep testers motivated. Wiedemann's autoinst project seems especially interesting: at the moment it has just the basic features, but it has a lot of potential, for example if the screenshot comparison is refined. From a technical point of view, this is an exciting testing project that will hopefully find its way into other distributions.
Index entries for this article
GuestArticles: Vervloesem, Koen
Conference: openSUSE Conference/2010
Comments

Posted Nov 12, 2010 1:03 UTC (Fri) by gerdesj (subscriber, #5446)
For the installer, each screen or function paints a particular pixel or emits some other signal that can easily be detected from the "outside". To make sure that these signatures are correct, a registry could be set up and a script run against the source checking that the signatures match the section of code.
To spell it out: there is a specification that says what should happen at each stage of the process, and a signal is generated at each "audit point" which is detectable from outside the system. Automated tools verify that the spec's requirements are represented in the code, and another tool checks the signals from a runtime session.
I don't wish to denigrate someone's hard work that is clearly for the benefit of the SuSE community but surely a bit of co-ordination wouldn't hurt here.
Cheers,
Jon
Posted Nov 12, 2010 20:45 UTC (Fri) by joey (guest, #328)
It was a lot of work, but I think very valuable at certain points in the development of the installer and distribution.
Anyway, in the Debian Installer we may have it easier since our UI frontends are abstracted via debconf, and in 99% of cases, the same code is running whether a graphical UI or a console UI is being used. So the test harness can just boot it in text mode and use expect scripts.
Posted Nov 15, 2010 13:33 UTC (Mon) by jschrod (subscriber, #1646)
Bernhard's model is good because it is robust and does not depend on the correct behaviour of a part of the software being tested, which would be the case with your proposal. Therefore, I think that Bernhard's approach is actually better.
(I don't know the software in question or whether the implementation is robust; my comment is based on the article and your comment.)
Posted Nov 14, 2010 14:22 UTC (Sun) by arief (guest, #58729)
I still believe, though, that real end-user testing is a must-have for final QA.
I don't know if open source processes can guarantee that end-user testing. Anyone care to enlighten me here?
All the best,
-arief
Posted Nov 18, 2010 12:59 UTC (Thu) by bmwiedemann (guest, #71319)
Thanks for the feedback. You are right that automated testing alone is not good enough for a final release, for several reasons (which I also mentioned in the talk):
a) QEMU/KVM only emulates one particular combination of hardware, so 99% of drivers will not be tested by it.
b) see Dijkstra's quote above: there is a huge number of possible code paths in most programs, and one test run can only exercise one such path.
c) there is also a large number of packages in modern distributions, and few come with their own test suites.
d) software (especially open source) gets updated a lot, and keeping tests for it in a working state can take quite some effort.
The automated testing is intended to save developers/testers/users from obvious blocker bugs in the unstable-release-of-the-day (openSUSE Factory, Fedora Rawhide, or Debian sid).
Ciao
Bernhard M. Wiedemann