

Shift your UI testing to sprint zero by leveraging mockups and AI

Don Jackson Chief Technologist, Application Delivery Management, Micro Focus
 

Have you ever noticed that testing is always late? I do mean always late. Testing, in fact, delays software delivery. You have to wait for the testing to happen, fix the bugs, then wait again.

Automated user interface (UI) tests might make the retest fast, but they really run and save time only once the blocking defects have been removed. With test-driven development, programmers found a way to at least integrate the testing practice so that the working code pops out with a working test suite.

UI tests, however, have generally been late, slow, and expensive to maintain. That may be about to change. After a decade of promising potential, artificial intelligence (AI) has finally made an effective test-first approach possible for UIs. Here's how.

Test-first for user interfaces

Behavior-driven development (BDD) was an early attempt to describe requirements as examples. In practice, teams would come up with requirements in the form of "given / when / then." For example:

Given I am logged in
When I search for "teddy ruxpin bear"
And I click "Teddy Ruxpin"
And I click "add to my cart"
Then my cart consists of one Teddy Ruxpin 

That at least gave the programmers a detailed picture of what the software should do and reduced friction over requirements. Still, unless the UI elements are defined up front, the actual "wiring" of the UI tests can happen only once the program exists.

Again, testing is late, and that wiring usually consists of XPath, CSS, or some other locator code that points at the button. If the buttons move, if the links change text, or if a link with the same text appears earlier in the page, the test will fail falsely and need to be fixed. Some experts, such as Angie Jones, have had success designing the entire user interface up front, on paper, so the test code can be created alongside the production code. That still leaves the problem of locator strings.
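
For illustration, the wiring for the shopping example above might look something like this in a Selenium-based suite (a hypothetical sketch; the URL, IDs, and locator expressions are invented):

# Hypothetical step wiring; the locator strings are invented for illustration.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://shop.example.com")  # placeholder URL

driver.find_element(By.NAME, "q").send_keys("teddy ruxpin bear")
driver.find_element(By.ID, "search-submit").click()

# Brittle: this breaks if another "Teddy Ruxpin" link appears earlier on the
# page, or if the product title text changes at all.
driver.find_element(By.XPATH, "(//a[text()='Teddy Ruxpin'])[1]").click()
driver.find_element(By.CSS_SELECTOR, "button.add-to-cart").click()

Every one of those locator strings is a maintenance liability the moment the page changes.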

Most organizations seem to lack the discipline to create screen designs as detailed as Jones suggests. Of those that do, few will spare programmers capable of writing production code to become software development engineers in test (SDETs) and write the test tooling.

To be successful at test-first, the software needs to do more than just click on a locator; it needs to recognize the element itself. Instead of a locator such as //div//span[text()='About the company'], natural-language processing (NLP) and semantic modeling can take a phrase such as "click the login button" and find the button, even if the button is not labeled "log in", just like a human could.

Doing that requires a model of connected idioms and terms, which is entirely possible with today's technology. A model that powerful, combined with NLP, allows tests created from a whiteboard sketch to become test automation.
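
As a toy illustration of the idea (not any particular product's engine), even a small model of related terms can let "click the login button" match a button that is actually labeled "Sign in":

# Toy semantic model: the synonym sets are invented, for illustration only.
SYNONYMS = {
    "login": {"log in", "sign in", "sign on"},
    "checkout": {"check out", "place order", "pay now"},
}

def matches(command_term, element_label):
    label = element_label.strip().lower()
    term = command_term.lower()
    return label == term or label in SYNONYMS.get(term, set())

# Candidate buttons scraped from a mockup or page (labels are made up).
buttons = ["Register", "Sign in", "Help"]
print(next(b for b in buttons if matches("login", b)))  # -> Sign in

A production engine brings a far richer model of idioms and context, but the principle is the same: match intent, not exact strings.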

Moving to sprint zero

Even with Scrum and other approaches, the people in finance and the executive suite still want to know how long development will take. That takes us to sprint zero, where projects are designed, funded, and mocked up. Those mockups might start as whiteboard drawings; UI designers then turn them into web pages or detailed designs.

A project manager would call this the "inception" phase; a Scrum enthusiast might call it sprint zero.

A human looking at the user interface could infer what should happen when things are clicked; a complex wireframe might even allow them to click through it. Sadly, where a human might recognize a shopping cart, a magnifying glass, or a hamburger menu—and even know what to expect for workflow—a computer will not.

Until now.

Computer vision is an AI technique that trains software, using images, to recognize symbols. Point it at a Google image search for "shopping cart," have a human correct its guesses, and the software learns to recognize a shopping cart. That makes it possible to create commands to click, type, and assert values based entirely on a mockup.
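
Production tools train models to recognize the symbol itself, as the next paragraph explains. Even a naive version of "find the shopping cart on this screen" can be sketched with OpenCV template matching, though (the file names below are placeholders):

# Naive illustration only: template matching finds a near-exact copy of the
# icon, whereas trained computer-vision models generalize across icon styles.
import cv2

screen = cv2.imread("screenshot.png", cv2.IMREAD_GRAYSCALE)
cart = cv2.imread("cart_icon.png", cv2.IMREAD_GRAYSCALE)

result = cv2.matchTemplate(screen, cart, cv2.TM_CCOEFF_NORMED)
_, confidence, _, top_left = cv2.minMaxLoc(result)

if confidence > 0.8:
    h, w = cart.shape
    center = (top_left[0] + w // 2, top_left[1] + h // 2)
    print("Cart icon found; click at", center)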

This kind of true AI goes by symbols, not bitmaps. That means a single test can be reused on many platforms. Computer vision can find what humans would recognize as the same symbol on Android, iPhone, and the web, even when that symbol changes. The code is less brittle, because a changed icon or a button in a different place will not cause an error. That makes for a very powerful aid to testing, but it would still require a strong technical person to write the code. That changes once you add NLP.

Mix the power of human and machine

As easy as BDD is to start, at some point a programmer has to explain to the computer what "search for Teddy Ruxpin" means. Traditional code does not understand idioms. For example, a computer would not know that "checkout" means clicking the checkout button. So some programmer has to write:

browser->wait_for_element("id=checkout");
object checkout = browser->get_element("id=checkout");
checkout->click();

Computer vision can save the programmer from having to hunt for the ID, and it lets the test keep working if the ID changes while the button still looks the same. With computer vision alone, though, some programmer still needs to write that code, perhaps two or three times for different desktop or mobile platforms.

With NLP, that code might simply be "click checkout button." Better yet, no one has to write it. The NLP engine watches a user exercising the software, then creates the code, which reads like idiomatic English. That is, it looks like what a human being would say, not like a computer programming language.
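
To make that concrete, here is a toy parse of such an idiomatic step into a structured action. This regex-based sketch is nothing like a real NLP engine, but it shows why "click checkout button" carries enough information to act on:

# Toy command parser; real NLP engines handle far more variation.
import re

ACTIONS = ("click", "type", "verify")

def parse(step):
    words = step.strip().lower().split()
    verb = words[0] if words and words[0] in ACTIONS else None
    # Drop trailing UI nouns such as "button" or "link" to get the target.
    target = re.sub(r"\s+(button|link|field)$", "", " ".join(words[1:]))
    return verb, target

print(parse("click checkout button"))   # -> ('click', 'checkout')
print(parse("type teddy ruxpin bear"))  # -> ('type', 'teddy ruxpin bear')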

If a test fails, the software can use computer vision to look at the screen the way a human would and try to find the element anyway. If the attempt works, the test software can update the expectations for the checkout button behind the scenes. That means one high-level test that maintains its own specifics for the web, Android, iOS, Windows, and other platforms.
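
What follows is only a sketch of that self-healing flow, with invented helper names (find_by_locator, find_by_vision, save_locator) standing in for whatever a real tool provides:

# The three callables passed in are placeholders, not a real tool's API.
def click_with_healing(find_by_locator, find_by_vision, save_locator,
                       stored_locator, description):
    element = find_by_locator(stored_locator)
    if element is None:
        # Fall back to computer vision: look at the screen the way a human
        # would and search for something that reads as the described control.
        element = find_by_vision(description)
        if element is not None:
            # The fallback worked, so update the stored expectation
            # behind the scenes for the next run.
            save_locator(description, element)
    if element is None:
        raise AssertionError(description + " not found")
    element.click()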

Instead of having to rely on rare and expensive SDETs, the organization can take its existing domain experts, analysts, and testers and have them create tests from sprint zero.

A vision of AI in testing

There are plenty of gurus who say the promises around AI are all hype and that companies are not really using AI in testing. If the goal is a big red button labeled "test this" that goes to a web page, automatically knows what to test, and spits out results, those gurus are absolutely right.

Instead, here is what this technology can actually do.

Computer vision makes it possible to create tests against a mockup that recognize symbols rather than locator strings, and those tests continue to recognize the same symbols once the UI exists on multiple platforms. NLP makes it possible for a traditional tester, subject-matter expert, or analyst to record, save, and modify tests without having to learn to code.

Instead of trailing the code by a sprint, or a release, the tests can exist, and even provide feedback, before the code is complete. That allows us to fulfill Heusser's mandate of software engineering: "The story isn't done until the tests run."

This all started with the observation that testing is always, by definition, late. These technologies change that, which is kind of a big deal. Computer vision and NLP exist right now, today, in tools that can transform the way test automation is done. Are you going to take advantage of them—or will you leave that up to the competition?
