

10 performance engineering trends to watch

Matthew Heusser, Managing Consultant, Excelon Development
 

As teams try to adopt a faster, more iterative, incremental style of software delivery using the latest performance engineering techniques, the state of the art can hold them back.

Emerging trends promise to pull performance testing into DevOps by enabling more responsive systems, in less time, for less risk and impact. These trends should play into your decision process as you assemble your resources and build your performance engineering tool set.

Here are 10 key trends to watch.

1. Open architectures

As cloud computing becomes the norm, the "language" of performance testing is moving away from the browser and toward TCP/IP and other Internet protocols. Web services and native mobile applications only accelerate this trend, since the traffic generator may not be a web browser, noted Leandro Melendez, consulting performance tester at Qualitest.

"The days of just recording load tests through a browser and playing it back are coming to an end, if not already over."
Leandro Melendez

That means making the parts work together and measuring their performance in isolation, from load to monitoring to debugging, is increasingly critical. One key element of that open architecture will be the cloud.
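To make the idea concrete, here is a minimal sketch of protocol-level load generation in Python, using the requests library to hit an API endpoint directly rather than replaying a recorded browser session. The URL, queries, and concurrency settings are placeholders, not a prescription.

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests

API_URL = "https://api.example.com/search"  # placeholder endpoint under test

def timed_request(query: str) -> float:
    """Send one request at the HTTP level and return elapsed seconds."""
    start = time.perf_counter()
    response = requests.get(API_URL, params={"q": query}, timeout=10)
    response.raise_for_status()
    return time.perf_counter() - start

def run_load(queries: list[str], workers: int = 20) -> None:
    """Generate concurrent protocol-level load instead of replaying a browser."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        timings = sorted(pool.map(timed_request, queries))
    p95 = timings[int(len(timings) * 0.95) - 1]
    print(f"requests={len(timings)} p95={p95:.3f}s")

if __name__ == "__main__":
    run_load(["shoes", "socks", "hats"] * 50)
```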

2. Cloud-native tools

Some monitoring tools exist as sidecars in cloud management tools such as Kubernetes, where they monitor and report traffic. Blue-green deploys are a popular technique in which you create an entirely new copy of the production environment in a cluster, called the "green" line. Once the new servers exist, you reroute traffic to them and they become the live line; the servers that previously carried traffic become the "blue," or old, line until the next deploy.

Blue-green deploys also allow you to create a copy of a production server, test against it, and then set aside a copy that you never promote to production.

The cloud can also be useful for creating targeted-source load tests. A test run can evaluate, for instance, how long it takes a user in another country or on another continent to load a website.
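One way to approximate such a targeted-source test is to deploy the same small probe script to cloud workers in different regions and compare the timings they report. The target URL and the PROBE_REGION environment variable in this sketch are assumptions for illustration.

```python
import os
import time

import requests

TARGET = "https://www.example.com/"               # placeholder site under test
REGION = os.environ.get("PROBE_REGION", "local")  # set by the cloud worker running this probe

def probe() -> dict:
    """Fetch the page once, as a user in this worker's region would."""
    start = time.perf_counter()
    response = requests.get(TARGET, timeout=30)
    elapsed = time.perf_counter() - start
    return {"region": REGION, "status": response.status_code, "seconds": round(elapsed, 3)}

if __name__ == "__main__":
    # The same script is deployed to cloud instances in each region of interest;
    # results are collected centrally and compared across regions.
    print(probe())
```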

3. Self-service

The way people in security, DevOps, and programming roles look at performance differs radically. Emerging tools help here because they are not only customized by role but, in some cases, also let technical specialists stay within their own tool sets.

IT operations engineers who can see performance data in the same place where they do their work will be more likely to refer to that data and take corrective action. Likewise, programmers who can do performance work within their integrated development environment (IDE) have a much better chance of keeping performance engineering in line with new development, preventing the bottleneck problem.
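As a sketch of what developer-facing, in-IDE performance work can look like, the unit test below fails the build when an assumed code path exceeds a hypothetical time budget; both the function and the budget are placeholders.

```python
import time
import unittest

def build_report(rows: list[dict]) -> list[str]:
    """Placeholder for the code path a developer wants to keep fast."""
    return [f"{row['id']}: {row['total']:.2f}" for row in rows]

class ReportPerformanceTest(unittest.TestCase):
    BUDGET_SECONDS = 0.05  # assumed budget; a real value comes from the team's agreed threshold

    def test_build_report_stays_within_budget(self):
        rows = [{"id": i, "total": i * 1.5} for i in range(10_000)]
        start = time.perf_counter()
        build_report(rows)
        elapsed = time.perf_counter() - start
        self.assertLess(elapsed, self.BUDGET_SECONDS,
                        f"build_report took {elapsed:.3f}s, over budget")

if __name__ == "__main__":
    unittest.main()
```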

4. SaaS-based tools

The capability for a tester to set up and run a test at cloud scale within minutes is approaching mainstream. That is only possible because of a combination of trends: self-service, open architecture, cloud-based testing, and SaaS.

Most older tools remain desktop-based and require significant setup and configuration. The emerging tools can do it all, and with just a few clicks. Configuration and setup can be saved in the cloud—but only with a high level of interoperability between tools.

5. Evolving requirements

In classic app testing, you often had to guess at what the software's use would be, create requirements and service-level agreements (SLAs) based on those guesses, and perform testing to those requirements. In contrast, DevOps-oriented shops see performance requirements as a conversation that changes over time.

Even traditional requirements are becoming more driven by complex use cases. "The experience of a customer on a mobile phone in a 3G country may be very different" than a laptop user 100 miles from the data center, said Vicky Giavelli, director of product management for performance engineering at Micro Focus. "Yet both expect the system to work."
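One way to express that conversation in code is to hold each user segment to its own budget. The sketch below checks 95th-percentile response times against assumed per-segment SLAs; the segment names and thresholds are illustrative, not recommendations.

```python
# Assumed per-segment latency budgets, in seconds; real values come out of
# the ongoing requirements conversation, not this sketch.
SLA_BUDGETS = {"mobile_3g": 4.0, "laptop_broadband": 1.5}

def percentile(values: list[float], pct: float) -> float:
    ordered = sorted(values)
    index = max(int(len(ordered) * pct) - 1, 0)
    return ordered[index]

def check_sla(samples: dict[str, list[float]]) -> dict[str, bool]:
    """Return pass/fail per segment using the 95th-percentile response time."""
    return {
        segment: percentile(times, 0.95) <= SLA_BUDGETS[segment]
        for segment, times in samples.items()
    }

if __name__ == "__main__":
    measured = {
        "mobile_3g": [2.8, 3.1, 3.9, 4.4, 2.5],
        "laptop_broadband": [0.6, 0.9, 1.1, 0.8, 1.3],
    }
    print(check_sla(measured))
```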

Performance engineering will need to monitor systems, find problems, and solve them before they become serious enough to have a significant impact on customer retention or sales.

6. Synthetic transactions

Monitoring production tells you how long requests spend on the server, but it doesn't show the customer experience. Synthetic transactions simulate a real user, in production, on a loop, all the time. A synthetic account for an e-commerce site might log in, search for a product, add it to a cart, take it out of the cart, and log out.

More complex transactions can simulate actual orders and track performance end to end; just be careful not to fulfill those orders or charge real credit cards.
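A minimal synthetic-transaction script might look like the sketch below, which times each step of an assumed e-commerce journey and repeats it on a loop. The endpoints and the synthetic account are hypothetical; a real script would also push its timings to monitoring and take care never to trigger fulfillment.

```python
import time

import requests

BASE = "https://shop.example.com"   # placeholder store; the endpoints below are hypothetical
SYNTHETIC_USER = {"email": "synthetic@example.com", "password": "not-a-real-account"}

def run_transaction() -> dict:
    """One pass of the scripted user journey, timing each step."""
    timings = {}
    session = requests.Session()
    steps = [
        ("login",       lambda: session.post(f"{BASE}/login", json=SYNTHETIC_USER, timeout=10)),
        ("search",      lambda: session.get(f"{BASE}/search", params={"q": "coffee"}, timeout=10)),
        ("add_to_cart", lambda: session.post(f"{BASE}/cart", json={"sku": "TEST-SKU", "qty": 1}, timeout=10)),
        ("remove_item", lambda: session.delete(f"{BASE}/cart/TEST-SKU", timeout=10)),
        ("logout",      lambda: session.post(f"{BASE}/logout", timeout=10)),
    ]
    for name, call in steps:
        start = time.perf_counter()
        call().raise_for_status()
        timings[name] = round(time.perf_counter() - start, 3)
    return timings

if __name__ == "__main__":
    while True:                      # run continuously, like a real synthetic monitor
        print(run_transaction())     # in practice, push these timings to monitoring
        time.sleep(60)
```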

"Synthetic transactions can be critical for finding production problems quickly, as a real user may just leave and not report the issue."
—Leandro Melendez

Tracking the actual user experience for an operation helps companies find bottlenecks, delays, and errors as they happen. Synthetic transactions can be critical for finding production problems quickly, since a real user may just leave and not report the issue, Qualitest's Melendez said.

7. Shared systems data

It's common to use dashboards to monitor performance. However, that data is siloed, isolated from the user experience. The actual time it takes a user to see things on the screen appears in a different dashboard than does system-to-system network performance, which in turn is most likely separate from internal metrics for such things as CPU, memory, and disk.

Rebecca Clinard, a performance engineering solution consultant at New Relic, suggests taking the output from the performance test and pushing those metrics into a monitoring tool used for both test and production. This data can include logs, APM analysis, front-end data, microservice performance, database performance, and so on, to create a "shared analysis" with drill-down capability.
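A lightweight version of that idea is simply to push test-run metrics to the same ingest endpoint the monitoring tool already uses. The sketch below posts a JSON payload to a placeholder URL; the endpoint, field names, and metric names are assumptions that would be replaced by whatever API the team's tool actually exposes.

```python
import json
import urllib.request

# Placeholder ingest endpoint; in practice this is whatever API the
# team's monitoring tool exposes for custom metrics.
METRICS_URL = "https://monitoring.example.com/api/metrics"

def push_test_metrics(run_id: str, metrics: dict) -> None:
    """Send load-test results to the same backend used for production monitoring."""
    payload = {"source": "load-test", "run_id": run_id, "metrics": metrics}
    request = urllib.request.Request(
        METRICS_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        response.read()

if __name__ == "__main__":
    push_test_metrics("run-2024-001", {
        "checkout_p95_ms": 840,
        "db_cpu_percent": 62,
        "error_rate": 0.004,
    })
```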

This can prevent painful reruns, shorten debugging time, and eliminate guess-change-retest as an improvement method. Qualitest's Melendez suggests publishing the dashboard to the whole team, or any interested employee, via a web page.

8. Testing in production

Testing in production exposes a small subset of the user population to software before the general population has access to it. Goranka Bjedov, a former capacity engineer at Facebook, used this approach to test performance by sending a large subset of users to a specific cluster.

In her report at the Workshop on Performance and Reliability, Bjedov suggested that New Zealand was a good testbed; it has over a million users, and problems there could be found and fixed overnight, minimizing or avoiding North American news coverage.

Software exists to allow so-called canary deploys for mobile applications, while feature flags are a common way to test laptop software in production. The features can be changed in a database or file, allowing configuration changes without a recompile or "push."

Some teams even perform continuous delivery, pushing every code change to production if it passes automated tests. At first the new code runs only for select internal employees, with feature flags controlling who sees it. Other strategies for testing in production include A/B split testing, incremental rollouts, and blue-green deploys.
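As an illustration of the feature-flag mechanics described above, the sketch below reads flags from a file and enables new code only for an assumed internal email domain; the file name, flag names, and domain are hypothetical.

```python
import json
from pathlib import Path

FLAGS_FILE = Path("feature_flags.json")   # placeholder; could equally be a database row

def is_enabled(flag: str, user: dict) -> bool:
    """Check a flag without recompiling or redeploying: edit the file to change behavior."""
    flags = json.loads(FLAGS_FILE.read_text()) if FLAGS_FILE.exists() else {}
    rule = flags.get(flag, {})
    if not rule.get("enabled", False):
        return False
    if rule.get("internal_only", True):
        return user.get("email", "").endswith("@example.com")   # assumed internal domain
    return True

def checkout(user: dict) -> str:
    if is_enabled("new_checkout_flow", user):
        return "new checkout flow"    # freshly deployed code, visible only to employees
    return "existing checkout flow"   # everyone else keeps the current behavior

if __name__ == "__main__":
    print(checkout({"email": "dev@example.com"}))
    print(checkout({"email": "customer@gmail.com"}))
```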

9. Chaos engineering

Originally designed in-house by Netflix but now open source, Chaos Monkey is a tool that tests for high availability in production by randomly pulling down services and seeing what breaks. This allows teams to build true redundancy into their systems.

The idea is to build resilience by eliminating single points of failure. The term chaos engineering comes from the idea of injecting chaos into the system and reporting problems, allowing engineering to improve the system before a real, unreported failure appears.
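A scaled-down version of the idea, not Chaos Monkey itself, might randomly pick one instance from a redundant pool, terminate it, and confirm the service still answers its health check. Everything in the sketch below, from the instance names to the health URL, is a placeholder.

```python
import random

import requests

# Hypothetical inventory of redundant instances behind one load-balanced service.
INSTANCES = ["app-1", "app-2", "app-3"]
HEALTH_URL = "https://service.example.com/health"   # placeholder health check

def kill_instance(name: str) -> None:
    """Placeholder for terminating one instance (a cloud API call, kubectl delete pod, etc.)."""
    print(f"terminating {name}")

def chaos_experiment() -> bool:
    """Kill one instance at random and verify the service as a whole stays healthy."""
    victim = random.choice(INSTANCES)
    kill_instance(victim)
    response = requests.get(HEALTH_URL, timeout=10)
    healthy = response.status_code == 200
    print(f"killed {victim}; service healthy: {healthy}")
    return healthy

if __name__ == "__main__":
    chaos_experiment()
```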

 

10. Machine learning, AI, and sentiment analysis

While the application of machine learning to log files can predict use patterns and generate more accurate loads, sentiment analysis is a newer, generally untapped innovation. It allows you to examine customer tickets and feedback to understand user perception.

The perception ratings come from plain text, analyzed by artificial intelligence for sentiment and assigned a numerical score. Sentiment analysis tells you what users perceive as too slow, so you can set the SLA exactly where it should be and avoid wasting time fixing what doesn't need to be fixed.
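As a toy illustration of the idea, rather than a real model, the sketch below scores tickets with a tiny hand-built lexicon and flags the ones whose wording suggests users perceive the system as too slow.

```python
# Toy lexicon; a real system would use a trained sentiment model instead.
NEGATIVE = {"slow", "sluggish", "lag", "timeout", "frustrating", "unusable"}
POSITIVE = {"fast", "snappy", "quick", "smooth", "responsive"}

def sentiment_score(ticket: str) -> int:
    """Crude score: positive words add one, negative words subtract one."""
    words = ticket.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def flag_perceived_slowness(tickets: list[str]) -> list[str]:
    """Return tickets whose wording suggests users find the system too slow."""
    return [t for t in tickets if sentiment_score(t) < 0]

if __name__ == "__main__":
    tickets = [
        "Checkout felt slow and the spinner was frustrating",
        "Search is fast and the new layout is smooth",
        "Report export hit a timeout twice today",
    ]
    for ticket in flag_perceived_slowness(tickets):
        print(ticket)
```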

Trend setting … or ghost hunting?

These performance engineering innovations may not all have a high rate of adoption yet, but each has been proven to work in specific contexts. Your goal should be to fit the technology to your problems.

Of the trends here, sentiment analysis and AI for performance engineering have the most potential, but are also the furthest from full maturity. Testing in production and unified monitoring are probably the easiest to implement, while self-service, cloud testing, and open architecture are the most powerful when used in combination.
