Reinventing Performance Testing: New Architectures

I am looking forward to sharing my thoughts on ‘Reinventing Performance Testing’ at the CMG imPACt performance and capacity conference, held on November 7-10, 2016 in La Jolla, CA. I decided to publish a few parts here to see if anything triggers a discussion.

It will be published as separate posts:
Introduction (a short teaser)
Cloud
Agile
Continuous Integration
New Architectures (this post)
New Technologies

Cloud seriously impacts system architectures, which has many performance-related consequences.

First, we have a shift to centrally managed systems. ‘Software as a Service’ (SaaS) is basically a centrally managed system with multiple tenants/instances, and mitigating performance risks moves to the SaaS vendor. On one hand, this lowers performance-related risks: it is easier to monitor, update, and roll back a system you manage centrally. On the other hand, it increases them: the systems get much more sophisticated, and every issue potentially impacts a large number of customers.

To take full advantage of cloud, cloud-specific features such as auto-scaling should be implemented. Auto-scaling is often presented as a panacea for performance problems, but even if it is properly implemented (which, of course, is better verified by testing), it just puts a price tag on performance. Resources will be allocated automatically, but you need to pay for them. So the question becomes how efficient the system is: any performance improvement results in immediate savings.
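
To make that price tag concrete, here is a minimal back-of-the-envelope cost model in Python; all the numbers (instance price, per-instance throughput, peak load) are made-up assumptions for illustration only:

import math

HOURLY_INSTANCE_PRICE = 0.20   # assumed price per instance-hour, USD
HOURS_PER_MONTH = 730          # average hours in a month

def monthly_cost(peak_rps, rps_per_instance):
    """Cost of the instances auto-scaling keeps running at peak load."""
    instances = math.ceil(peak_rps / rps_per_instance)
    return instances * HOURLY_INSTANCE_PRICE * HOURS_PER_MONTH

# Before tuning: each instance sustains 100 requests/sec at peak.
before = monthly_cost(peak_rps=2500, rps_per_instance=100)
# After a 25% efficiency improvement: 125 requests/sec per instance.
after = monthly_cost(peak_rps=2500, rps_per_instance=125)
print(f"before: ${before:,.2f}, after: ${after:,.2f}, "
      f"monthly savings: ${before - after:,.2f}")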

Another major trend is using multiple third-party components and services, which may not be easy to incorporate into testing properly. The answer to this challenge is service virtualization, which allows simulating real services during testing without actually accessing them.
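
As a sketch of the idea, the stub below simulates a third-party HTTP service using only the Python standard library. The endpoint, response body, and latency range are illustrative assumptions; dedicated service virtualization tools add much more (recording real traffic, stateful behavior, multiple protocols):

import json
import random
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

class VirtualService(BaseHTTPRequestHandler):
    def do_GET(self):
        # Simulate the real service's observed latency (assumed 50-110 ms).
        time.sleep(random.uniform(0.05, 0.11))
        body = json.dumps({"status": "ok", "source": "virtualized"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep load-test console output clean

# Point the system under test at this address instead of the real service.
HTTPServer(("localhost", 8080), VirtualService).serve_forever()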

Cloud and virtualization have triggered the appearance of dynamic, auto-scaling architectures, which significantly impact getting and analyzing feedback. The system’s configuration is no longer a given and often can’t be easily mapped to hardware. As already mentioned, performance testing is rather a performance engineering process (with tuning, optimization, troubleshooting, and fixing multi-user issues) that eventually brings the system to the proper state, rather than just testing. And the main feedback you get during testing comes from monitoring your system (response times, errors, and the resources your system consumes).
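
As a simple illustration of the resource side of that feedback, the sketch below samples CPU and memory utilization while a test runs. It assumes the third-party psutil library; in practice a monitoring agent or the load testing tool’s own integrations would do this job:

import time
import psutil  # third-party library, assumed to be installed

def sample_resources(duration_s, interval_s=5):
    """Collect CPU and memory utilization samples alongside a load test."""
    samples = []
    for _ in range(max(1, duration_s // interval_s)):
        samples.append({
            "ts": time.time(),
            "cpu_pct": psutil.cpu_percent(interval=None),
            "mem_pct": psutil.virtual_memory().percent,
        })
        time.sleep(interval_s)
    return samples

for sample in sample_resources(duration_s=30):
    print(sample)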

Dynamic architectures represent a major challenge for both monitoring and analysis: it is difficult to analyze results when the underlying system is changing all the time. Even before, analysis often went beyond comparing results against goals, for example when the system under test didn’t match the production system exactly or when tests didn’t represent the full projected load. When the configuration is dynamic, the challenge becomes much more serious. Another challenge is when tests are part of Continuous Integration, where all monitoring and analysis should be done automatically. The more complex the system, the more important feedback and analysis become, and the ability to analyze monitoring results and test results together helps a lot.
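
For the Continuous Integration case, automated analysis can start as simply as comparing measured metrics against explicit goals and failing the build otherwise. Here is a sketch; the goal values and the hard-coded sample data are placeholders for what a real pipeline would read from the load testing tool’s output:

import statistics
import sys

def percentile(samples, pct):
    """Nearest-rank percentile of a list of response times."""
    ordered = sorted(samples)
    return ordered[max(0, int(round(pct / 100 * len(ordered))) - 1)]

def check(response_times, error_rate):
    # Goal values below are illustrative; take them from your requirements.
    goals = [
        ("median (s)", statistics.median(response_times), 0.5),
        ("90th percentile (s)", percentile(response_times, 90), 1.0),
        ("error rate", error_rate, 0.01),
    ]
    passed = True
    for name, measured, limit in goals:
        ok = measured <= limit
        passed = passed and ok
        print(f"{name}: {measured:.3f} (limit {limit}) {'OK' if ok else 'FAIL'}")
    return passed

# In a real pipeline these numbers come from the load testing tool's results.
times = [0.31, 0.42, 0.55, 0.48, 0.95, 0.37, 0.61, 0.44, 1.20, 0.52]
if not check(times, error_rate=0.002):
    sys.exit(1)  # a non-zero exit code fails the CI stage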

Traditionally, monitoring was done at the system level. With virtualization, system-level monitoring doesn’t help much anymore and may even be misleading, so getting information from application and database servers becomes very important. Many load testing tools have recently announced integration with Application Performance Management / Monitoring (APM) tools, such as AppDynamics, New Relic, or Dynatrace. If using such tools is an option, it definitely opens new opportunities to see what is going on inside the system under load and what needs to be optimized. One thing to keep in mind is that older APM tools and profilers may not be appropriate to use under load due to the high overhead they introduce.
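
One generic technique for connecting load test results with APM data (not any particular vendor’s API) is to tag test traffic with a run identifier that a server-side APM agent can capture as a custom attribute. The header name below is a made-up convention; check what your APM and load testing tools actually support:

import uuid
import requests  # third-party HTTP library, assumed to be installed

RUN_ID = str(uuid.uuid4())  # one identifier per test run

def tagged_get(url):
    # The APM agent can be configured to capture this header and attach
    # it to the traced transaction for later filtering and correlation.
    return requests.get(url, headers={"X-Load-Test-Run": RUN_ID})

response = tagged_get("http://system-under-test.example/api/orders")
print(response.status_code, "run id:", RUN_ID)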

With truly dynamic architectures, we face a great challenge: to discover the configuration automatically, collect all the needed information, and then properly map the collected information and results onto the changing configuration and system components in a way that highlights existing and potential issues and, potentially, makes automatic adjustments to avoid them. This would require very sophisticated algorithms (including machine learning) and could eventually create real Application Performance Management (today the word “Management” is rather a promise than a reality).

In addition to new challenges in monitoring and analysis, virtualized and dynamic architectures open up a new application for performance testing: testing whether the system changes dynamically under load in the way it is supposed to change.
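
A sketch of such a test might look like the following. Here apply_load and current_instance_count are hypothetical hooks into your load generator and your cloud or cluster API, and the expected instance counts would come from the scaling policy’s design:

import time

def apply_load(rps):
    """Placeholder: tell the load generator to hold `rps` requests/sec."""
    raise NotImplementedError("wire this to your load testing tool")

def current_instance_count():
    """Placeholder: ask the cloud or cluster API how many instances run."""
    raise NotImplementedError("wire this to your cloud provider's API")

def assert_scales_to(load_rps, expected_min, within_s=600):
    """Fail unless the system reaches `expected_min` instances in time."""
    apply_load(load_rps)
    deadline = time.time() + within_s
    while time.time() < deadline:
        if current_instance_count() >= expected_min:
            return
        time.sleep(15)
    raise AssertionError(
        f"did not reach {expected_min} instances "
        f"within {within_s}s at {load_rps} rps")

assert_scales_to(load_rps=500, expected_min=2)
assert_scales_to(load_rps=2000, expected_min=6)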
