
The future of synthetic testing is in the cloud

While not a new concept, the term “cloud” creates a lot of confusion because it means different things to different audiences. I’ve been speaking to customers over the last few months about our new cloud architecture for Synthetic testing locations and their confusion is clear.

I hear things like "Will you have more than one geographic location?", "How will performance be accurate if the machine is not physical?", and "You guys are just doing this to save money!" I get the confusion. Cloud can be confusing, even to technology folks, and there's a lot of misinformation out there.

So, to understand how we got here, let’s start with a brief history lesson.

Historically, there have been several challenges with synthetic, and I think it's helpful to walk through them to show why we've moved to the cloud:

  1. Things were slow. When we wanted to add a location, we had to ship hardware and get someone to install that hardware in a rack with power and network. Sound easy? Try doing that in India. This could take 30+ days in easy locations, but months in others.
  2. Hardware was outdated. With constant changes in memory, CPU, resolution, and other factors, it was hard to keep up. Each time you needed an upgrade, you either needed an entirely new machine or, at minimum, remote folks to upgrade processors and memory. Try doing that across 80+ locations with 15+ machines per location and you can imagine the difficulty.
  3. Failures were happening too often. Fixed hardware is a single point of failure, even when we had redundant machines. When a data center had issues, or a box had issues, our customers had issues. And the last thing you want to do with synthetic is introduce false positives (the bane of all synthetic testing) into the system, yet that was happening far too often.
  4. Scaling up – or down – was a real challenge. I remember when we would sign a new customer who wanted to add hundreds or thousands of tests; we had to slow them down so we had time to add more hardware. Sometimes, if we couldn't slow them down, other customers' tests might be skewed because the physical machines were at their limit.

Each of these factors hurt data quality, stretched time to market, and slowed our ability to innovate efficiently for our customers. If you're spending the majority of your time on data center and hardware logistics, it doesn't leave a lot of time to build better features, keep browsers updated, or satisfy your customers.

Cloud effectively solves each of these major issues.

With cloud, we are leveraging the largest cloud providers' locations, including AWS, Azure, and Alibaba, with Google coming very soon. This gives us geographic coverage in up to 60 distinct global locations as of today, growing to 80. So yes, we will have real physical locations.

Using cloud providers allows us to scale locations up and down as required, eliminate outdated hardware and configurations, and limit the number of failures. What this means for our customers is simply better data that is more reliable and has less noise, enabling them to make better decisions on a more robust underlying architecture.

Imagine a self-scaling, self-healing global synthetic platform that adjusts in real time both to our customers' needs and to conditions it detects, like failures or demand spikes. Imagine if we didn't have to waste time shipping hardware around the world or planning the next CPU upgrade. Imagine if we could spin up a new location – public or private – in less than an hour.

Well, you don't have to imagine, because we built it.
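
To make that concrete, here's a rough sketch of what provisioning a new location programmatically can look like. This is purely illustrative and assumes an AWS-based node launched with Python and boto3; the region, image ID, instance type, and tags are hypothetical placeholders, not our actual tooling.

    # Illustrative only: spin up a single synthetic-testing node in a given cloud region.
    # The AMI is assumed to be a pre-baked image with browsers and the test agent installed.
    import boto3

    def launch_synthetic_location(region, ami_id, instance_type="t3.medium"):
        ec2 = boto3.client("ec2", region_name=region)
        response = ec2.run_instances(
            ImageId=ami_id,
            InstanceType=instance_type,
            MinCount=1,
            MaxCount=1,
            TagSpecifications=[{
                "ResourceType": "instance",
                "Tags": [{"Key": "role", "Value": "synthetic-location"}],
            }],
        )
        return response["Instances"][0]["InstanceId"]

    # Example: bring up a new location in Mumbai in minutes instead of months.
    # launch_synthetic_location("ap-south-1", "ami-0123456789abcdef0")

Scaling back down is just as simple (a terminate_instances call in this sketch), which is what makes the self-scaling, self-healing behavior described above practical.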

Fueling innovation

For us, cloud is not a cost savings play. It’s about fueling innovation.

Our end goal is to:

  • Provide the cleanest Synthetic data possible to our customers.
  • Keep hardware and browsers updated at all times.
  • Scale whenever we need to, so we can keep innovating.

Ultimately, this will allow us to add new features and functionality to Synthetic, including in-browser JS error detection and full, movie-quality session playback, and to do things that no one else is doing.

The future is here

You must be wondering: are there any downsides to cloud?

The answer is yes; there are always going to be ‘downsides’ to every technology out there. But the main ‘downside’ with cloud that I can think of is the lack of ISP visibility, and I'm not sure that's a real issue in 2019.

With most traffic coming from mobile for many of our customers, and with the complexity of the “last mile,” do we really need to be troubleshooting core backbone ISP routing paths? Some customers do, but most do not.

A lot of customers are still trying to use synthetic to capture end-user experience, and that's where the concerns about going to cloud come from: it's not “true” user experience. Customers don't live in the cloud any more than they live in a fixed data center connected to a backbone ISP. Instead, they're at home on laptops and wandering around cities on mobile devices. Synthetic was a good proxy for user experience before Real User Monitoring (RUM), but in 2019 and beyond, RUM is the only way to truly understand end users.

Change can be hard, especially for an old Synthetic guy like me. But this is the kind of change I can get behind.

Synthetic monitoring is still a critical pillar of a holistic monitoring strategy: it's an excellent early-warning detection system, the best way to establish baselines and SLAs, and it ensures your critical transactions are monitored even when you don't have real users on your site. For these key use cases, what matters is not cloud versus “physical” but an architecture that provides the most stable, clean, and reliable monitoring data. Clearly, that is the cloud.