
Kill more bugs! Add randomization to your web testing

Paul Grizzaffi, Principal Automation Architect, Magenic

In his book Software Testing Techniques, Boris Beizer describes the Pesticide Paradox: no matter what testing methods you choose, subtler "pests," or software bugs, will still slip past them.

Beizer's explanation is that pests will no longer exist in the places where you've applied pesticide; you'll find them only where you haven't applied it. The analogy to testing is that, over time, you'll find fewer and fewer bugs in the parts of your code that have been highly tested, and the bugs that users do find will be in the areas that you have tested less rigorously.

So how do you address that? Expand your testing coverage by adding fuzzing to your process.

Fuzzing explained

Roughly speaking, fuzzing is testing without knowing what a specific outcome should be. When fuzzing, you don't necessarily know what should happen, but you have a good idea of some things that shouldn't, such as 404 errors, server crashes, or application crashes.

As a tester, you can use fuzzing to help uncover these kinds of errors when you're testing text box widgets on a GUI or web page. Testers take blocks of potentially problematic text and enter them into the text boxes to see if anything bad happens. Sometimes, the blocks are arbitrarily generated characters, adding a dimension of randomness to the testing. But why should text boxes have all the fun?
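
As a rough illustration, a text-box fuzzing pass with Selenium WebDriver might look like the following sketch. The page URL, the "q" field name, and the character pool are all hypothetical choices, not part of any particular tool:

```python
# Sketch: enter a randomly generated block of text into a text box
# and apply a crude check for obvious errors. URL and field name are
# placeholders; the character pool is one possible choice of "troublemakers."
import random
import string

from selenium import webdriver
from selenium.webdriver.common.by import By

def random_block(length=256):
    """Build a block of arbitrary characters, seasoned with markup,
    quotes, and escape-looking sequences that often break parsers."""
    pool = string.ascii_letters + string.digits + "<>'\";%\\x00 "
    return "".join(random.choice(pool) for _ in range(length))

driver = webdriver.Chrome()
driver.get("https://example.com/search")      # hypothetical page under test
box = driver.find_element(By.NAME, "q")       # hypothetical text box
box.send_keys(random_block())
box.submit()

# Did anything obviously bad happen?
if "404" in driver.title or "error" in driver.title.lower():
    print("Something weird happened:", driver.current_url)
driver.quit()
```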

Today’s websites are highly interconnected, multi-server, multi-vendor applications, often with connections to out-of-network servers that neither the application’s owners nor the team controls. This makes it difficult both to enumerate and to control all the possible paths through your system.

Even if all possible paths could be identified, most organizations would not have the time to test and evaluate the results of all these scenarios, regardless of whether they apply automation to help with that testing. Fuzzing based on randomness at the UI level, specifically via browser clicks, can provide a look at additional code paths, particularly those that are valid but are not immediately intuitive.


Build your own random clicker

A random clicker is a program that clicks random clickable items (buttons, hyperlinks, and so on), applying various heuristics to determine whether something weird happened. In this way, you are essentially fuzzing with browser clicks.

The above description may sound vague or complicated, but it's not. You can build one of these yourself, often with very little effort. For the typical website, the basic browser fuzzing steps are as follows (a minimal code sketch appears after the list):

  1. Navigate to a start page.
  2. Randomly click an <a> tag.
  3. Did you find anything weird?
  4. If so, save information about what's weird, then go to Step 1.
  5. If not, save information about where you currently are, then go to Step 2.
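
To make this concrete, here's a minimal sketch of that loop in Python with Selenium WebDriver. The start URL, the click budget, and the title-based weirdness heuristic are all assumptions; a production clicker would need smarter waits and a richer set of heuristics:

```python
# Sketch of the basic algorithm above, using Selenium WebDriver.
# START_URL, MAX_CLICKS, and the heuristic are illustrative assumptions.
import random
import time

from selenium import webdriver
from selenium.webdriver.common.by import By

START_URL = "https://example.com"   # hypothetical start page
MAX_CLICKS = 100                    # arbitrary budget for one run

def looks_weird(driver):
    """Crude heuristic: flag pages whose title suggests an error."""
    title = driver.title.lower()
    return any(marker in title for marker in ("404", "error", "not found"))

driver = webdriver.Chrome()
driver.get(START_URL)                                  # Step 1: start page

for click in range(MAX_CLICKS):
    links = driver.find_elements(By.TAG_NAME, "a")     # Step 2: collect <a> tags
    if not links:
        driver.get(START_URL)                          # dead end: start over
        continue
    try:
        random.choice(links).click()                   # ...and click one at random
    except Exception:
        continue                                       # hidden/stale element: retry
    time.sleep(1)                                      # crude wait; poll in real code
    if looks_weird(driver):                            # Step 3: anything weird?
        driver.save_screenshot(f"weird_{click}.png")   # Step 4: save evidence...
        print("WEIRD:", driver.current_url)
        driver.get(START_URL)                          # ...then go back to Step 1
    else:
        print("OK:", driver.current_url)               # Step 5: log and keep going

driver.quit()
```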

From this basic algorithm, you can see that it doesn't take a lot of code or effort to build a rudimentary version of a clicker.

You can probably see places where you might modify the algorithm to make it even more valuable for your unique needs. This is part of the allure of a tool like this; it's relatively cheap to build and execute, and it can expose problems that your existing testing might not have seen.
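
One common tweak, for example, is keeping the clicker inside your own property by filtering candidate links to an allowed domain. A sketch, where the domain and the filtering rule are my assumptions:

```python
# Sketch of one modification: only follow links that stay on your own domain,
# so the clicker doesn't wander off into third-party sites.
from urllib.parse import urlparse

ALLOWED_DOMAIN = "example.com"      # hypothetical property under test

def on_site(link):
    """Keep a link only if it points back into the allowed domain."""
    href = link.get_attribute("href")
    if not href:
        return False
    host = urlparse(href).hostname or ""
    return host == ALLOWED_DOMAIN or host.endswith("." + ALLOWED_DOMAIN)

# In the click loop, filter before choosing:
# links = [a for a in driver.find_elements(By.TAG_NAME, "a") if on_site(a)]
```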

What to consider before using randomization

One of the reasons testers are reluctant to adopt randomization is concern about reproducibility. Your automation has little value if you can't reproduce the situation that caused a specific unexpected behavior. Without reproducibility, it's harder to debug a potential issue, and your team can't assess whether it has fixed the issue.

To aid in reproducibility, a random clicker leaves a trail of breadcrumbs. That is, it logs things that are likely to be of interest to someone who is trying to determine whether something weird should be considered an issue. These logs are also interesting to someone who is debugging an issue, or to someone who is testing that an issue has been resolved.

The trail of breadcrumbs can include (a lightweight logging sketch follows the list):

  • Logs for each page visited
  • Screenshots for each page visited or for each weird state
  • Which heuristic caused a state to be reported as weird
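
One lightweight way to leave that trail is to append one record per click to a log file. The JSON-lines format and field names below are my choices, not anything prescribed:

```python
# Sketch: one JSON line per click forms the breadcrumb trail.
# Field names are illustrative; adapt them to your triage process.
import json
import time

def log_breadcrumb(logfile, url, screenshot_path=None, heuristic=None):
    """Append one breadcrumb record; heuristic is set only for weird states."""
    record = {
        "timestamp": time.time(),
        "url": url,
        "screenshot": screenshot_path,   # path to the saved screenshot, if any
        "heuristic": heuristic,          # which check flagged the page, or None
    }
    logfile.write(json.dumps(record) + "\n")

# Usage inside the click loop:
# with open("breadcrumbs.jsonl", "a") as log:
#     log_breadcrumb(log, driver.current_url)              # normal page
#     log_breadcrumb(log, driver.current_url,
#                    screenshot_path="weird_17.png",
#                    heuristic="title-404")                 # weird page
```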

This is assistance, not testing

This is not traditional, test-case-based automation. Traditional automation typically takes test cases that humans perform and has computers perform those activities instead, producing pass/fail or green/red results.

Instead, the type of browser fuzzing described here facilitates each actor working to its strength: Computers do the grunt, repetitive work, while humans do the cognitive work of deciding if a specific weirdness constitutes a problem.

More specifically, the random clicker produces two "piles" of results: one pile of clicks where no problem was detected, and a hopefully smaller pile that consists of clicks where something weird happened. The tester then inspects the results, typically focusing on the weirdness pile, deciding which results indicate a problem and which are false positives.
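
If the breadcrumbs use the JSON-lines format sketched above, splitting a run into those two piles takes only a few lines:

```python
# Sketch: partition a breadcrumb log into "clean" and "weird" piles for triage.
# Assumes the JSON-lines format from the earlier logging sketch.
import json

clean, weird = [], []
with open("breadcrumbs.jsonl") as log:
    for line in log:
        record = json.loads(line)
        (weird if record.get("heuristic") else clean).append(record)

print(f"{len(clean)} clean clicks, {len(weird)} to review")
for record in weird:                  # the tester starts with the weird pile
    print(record["heuristic"], record["url"], record.get("screenshot"))
```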

If you have a high number of false positives, you should report that to the tool maintainers so they can adjust the offending heuristic. Unfortunately, since we are dealing with heuristics here, it may not be possible to make adjustments that reduce false positives without also causing false negatives—that is, reporting something as not weird that really should be labeled as weird.

In cases like this, removing the problematic heuristic may be the best option, particularly if the effort needed to investigate the false positives outweighs the value produced by finding legitimate issues.


Don't be daunted

Adding randomization to your testing may seem daunting, but it need not be so. If your company and your users are happy with your product's quality, perhaps there isn't sufficient value in randomization right now.

Similarly, if your team is struggling to address the issues that your current testing approach is revealing, you may not have the bandwidth to handle randomization.

If, however, applying "pesticide" to additional areas of your property might be valuable to you, consider using randomization to help uncover bugs in those areas. Just keep in mind the considerations above, so that when you discover a new weirdness, you have the data you need to classify it as an issue or nonissue.

For more about how to build a random clicker for high-volume, automated testing, come to my presentation, "Well, That's Random: Automated, Fuzzy Browser Clicking," at the STAREAST software testing conference, which runs April 28–May 3 in Orlando, Florida. TechBeacon readers can save $200 on registration fees by using promo code SECM. Can’t make it? Register for STAREAST Virtual for free to stream select presentations.
