
AI in testing: What it is, and why it matters

Paul Merrill, CEO and Test Automation Consultant, Beaufort Fairmont
 

AI in testing is becoming mainstream. Some 21% of IT leaders surveyed said they are putting AI trials or proofs of concept in place, according to the 2020-21 World Quality Report. Speaking to longer-term trends, only 2% of respondents said AI has no part in their future plans.

If you've been waiting out the AI hype, it's time to dive in. Here's what you need to know about AI in testing.

What are the elements?

The most essential foundations of AI as applied to software testing are machine learning and neural networks. Machine learning allows computers to classify objects or predict likelihoods based on the data they have. Neural networks loosely emulate how the part of the brain that predates humans makes associations.

Used separately or together, these subtypes of AI lend themselves to specific testing activities.

For the most part, these activities include:

  • Discovering the actions that testers may take to interact with the system under test (SUT)
  • Classifying the outcomes of testing activities as likely defects (see the sketch after this list)
  • Calculating the likelihood of an outcome being a defect
  • Associating events or activities of testing with outcomes
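To make the classification and likelihood items above concrete, here is a minimal sketch in Python using scikit-learn's LogisticRegression. The feature names, data, and labels are hypothetical placeholders; a real model would be trained on your own triaged test history.

```python
# Minimal sketch: train a classifier that scores how likely a test outcome
# is a real defect, based on historical, hand-labeled results.
# Feature names and values below are hypothetical placeholders.
from sklearn.linear_model import LogisticRegression

# Each row: [response_time_ms, error_count, pixels_changed_pct]
historical_outcomes = [
    [120, 0, 0.1],
    [950, 3, 12.4],
    [110, 0, 0.0],
    [870, 5, 9.8],
]
# 1 = confirmed defect, 0 = not a defect (from past triage)
labels = [0, 1, 0, 1]

model = LogisticRegression()
model.fit(historical_outcomes, labels)

# Score a new outcome: probability that it represents a defect
new_outcome = [[640, 2, 7.5]]
defect_probability = model.predict_proba(new_outcome)[0][1]
print(f"Likelihood of defect: {defect_probability:.2f}")
```

The point is not the particular algorithm; it is that the model can only classify or rank outcomes it has seen labeled examples of, which is why the human judgments feeding it matter so much.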

It's also important to know what AI in testing cannot accomplish. This includes:

  • Identifying the purpose of a set of testing actions
  • Creating or discovering software testing oracles

A software testing oracle, as defined by Dr. Cem Kaner, "is a tool that helps you decide whether the program passed your test." This can include specific expected data or visuals that should be present in the system after a test case runs, which can then be compared to actual outcomes.
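For illustration, here is a minimal sketch of the simplest kind of oracle: a known expected value baked into an automated check. The apply_discount function and its expected total are hypothetical examples, not anything from a real codebase.

```python
# Minimal sketch of an explicit test oracle: the expected value decides
# whether the program passed the test. The function and values are hypothetical.

def apply_discount(price: float, percent: float) -> float:
    """System under test: apply a percentage discount to a price."""
    return round(price * (1 - percent / 100), 2)

def test_discount_oracle():
    # The oracle: the known-good expected outcome for this input
    expected_total = 90.00
    actual_total = apply_discount(100.00, 10)
    assert actual_total == expected_total

if __name__ == "__main__":
    test_discount_oracle()
    print("Oracle check passed")
```

Deciding that 90.00 is the right answer is the part AI cannot do for you; that expectation comes from people who understand the business.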

Getting this verification, though, is often hard-won, and it happens through conversations with product owners and not just through tools. Oracles often litter the blank space between our feature definitions and the margins of our backlogs. They often exist in the silence within our conversations and the implicit meanings of our words. It is still a very human and courageous endeavor to tread these waters with our counterparts. And it is the thing that makes us testers.

How does it work? 

Most of today's AI tools solve testing challenges by:

  • Visually comparing images of applications and reporting differences (visual tools; see the sketch after this list)
  • "Learning" many times more application interactions than humans can
  • Comparing outcomes or states of the system with previous or known "good" states
  • Characterizing outcomes of existing testing and rolling up vast change sets so that humans can easily digest them
  • Remembering which outcomes are good or bad, and pattern-matching new outcomes
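As an example of the first item, here is a minimal sketch of a visual comparison using Pillow's ImageChops. The file names and threshold are hypothetical, and the two screenshots are assumed to be the same size.

```python
# Minimal sketch: compare two application screenshots and report differences.
# File names and the difference threshold are hypothetical.
from PIL import Image, ImageChops

def screenshots_differ(baseline_path: str, current_path: str,
                       threshold: int = 0) -> bool:
    """Return True if the current screenshot differs from the baseline."""
    baseline = Image.open(baseline_path).convert("RGB")
    current = Image.open(current_path).convert("RGB")

    diff = ImageChops.difference(baseline, current)
    # getbbox() returns None when the images are pixel-identical
    if diff.getbbox() is None:
        return False
    # Ignore tiny per-channel differences below the threshold
    max_channel_delta = max(delta for _, delta in diff.getextrema())
    return max_channel_delta > threshold

if __name__ == "__main__":
    if screenshots_differ("baseline.png", "after_change.png"):
        print("Visual difference detected - flag for human review")
```

Commercial visual tools layer learned models on top of this basic idea so they can ignore differences a human would also ignore, such as anti-aliasing or shifting ad content.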

These tools are becoming more commoditized all the time, with increasing numbers of commercial and cloud-based alternatives available via APIs. Just about every testing vendor either has an AI offering, often embedded in existing test software, or is working on one. Unfortunately, there's not much yet from the open-source community.

What are potential use cases, and why is it important?

Today's applications interact with others through APIs, leverage legacy systems, and grow in complexity from one day to the next in nonlinear fashion. These and other development trends make testing incredibly complex, and AI will be able to reduce the burden on human testers.

Here are a few examples of where AI testing will fit:

  • Help select, manage, and drive SUTs faster, more effectively, and less expensively than we can today
  • Use data in your existing QA systems (defects, resolutions, source code repo, test cases, logging, etc.) to help identify problem areas in the product
  • Automatically create and manage test data (see the sketch after this list)
  • Reduce the amount of "dirty work" humans do in implementing tests, executing them, and analyzing the results
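As a taste of the test data item, here is a minimal sketch of generating disposable, randomized test data using only the Python standard library. The field names and formats are hypothetical; a real setup would match your domain's schemas and constraints.

```python
# Minimal sketch: generate fresh synthetic test data for each run instead of
# maintaining it by hand. Field names and formats are hypothetical.
import random
import string

def random_string(length: int = 8) -> str:
    return "".join(random.choices(string.ascii_lowercase, k=length))

def make_test_user() -> dict:
    """Create a single synthetic user record for a test run."""
    username = random_string()
    return {
        "username": username,
        "email": f"{username}@example.test",
        "age": random.randint(18, 90),
        "opted_in": random.choice([True, False]),
    }

if __name__ == "__main__":
    test_users = [make_test_user() for _ in range(5)]
    for user in test_users:
        print(user)
```

AI-assisted tools push this further by learning realistic data shapes from production-like sources, but the managing, that is, knowing which data a test actually needs, still benefits from human insight.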

Further, AI in testing could reduce QA costs from 28% of IT budgets down to single digits, according to the World Quality Report.

Challenges with AI testing

The biggest challenge for AI-based testing is also one of the biggest in general testing. 

The most important element of testing is trust. It's also the easiest component to lose. To mitigate risks, you must identify them. We'd rather estimate higher risk and have others argue the risk level down than miss a risk or underestimate it. We'd rather identify false positives for defects than miss one.

This basic set of preferences pushes testing toward the path of lost trust. Developers get tired of potential defects that aren't. Product owners grow weary of testers asking about business impacts that aren't high-risk. Entire teams can fall into frustration with testing and lose trust in the ones leading it. 

But trust is essential for QA and testing practices. Without trust in testing, the foundation on which decisions get made rests on quicksand. Without trust in testing, there are no absolutes, no guardrails to pinball you toward due north.

AI suffers the same trustworthiness issues as testing.

When you place too much faith in AI, you doom yourself to missing obvious issues. If you believe too little, you miss the benefit. AI is less intelligent than most people think. It makes stupid mistakes that humans wouldn't.

It's also genius. Its pattern-matching ability is exceptional; it sees things people would never see. AI can make associations across so many dimensions of a problem that the human brain can barely evaluate whether it is right or wrong.

AI in testing introduces the ability to do so much more with testing when we can trust it, and yet so much greater risk when it undermines our trust. 

And so the essential question is: How do you approach AI in testing? How do you keep your skepticism high enough to protect your organization from its belligerent assertions, while trusting it just enough to gain its awesome power?

The skills gap

Another factor holding back AI-based testing is the lack of skills among test and test automation engineers, according to the World Quality Report. "There is still some way to go in this regard," the report said; around one-third of respondents admit to a skills gap. 

The skills needed include data science and expertise in applying generic modeling tools to testing in general and to your testing domain in particular. Test engineers will also be required to understand some deep learning principles.

Where we go from here

While the utility of AI in testing is sizable, the key is understanding that it supplements, rather than replaces, the work of testers.

AI will not solve all your problems. It will not "do all the testing." It may even create a few new problems for you. But successful companies are finding the sweet spot: because their teams know the current limits of AI in testing, their testers' capabilities expand.
