How to solve the challenges of multicloud AWS, Azure and GCP observability

Learn how AI-powered, full-stack observability into multicloud environments enables DevOps teams to tame multicloud complexity so you can deliver better software faster.

Consumer demand for digital services has soared in the past year, steeply accelerating a trend that was already well underway. Behind the scenes working to meet this demand are DevOps teams, spinning up multicloud IT environments to accelerate digital transformation so their organizations can sustain growth at this new pace.

Versatile, feature-rich cloud computing environments such as AWS, Microsoft Azure, and GCP have been a game-changer. Because they consume fewer in-house resources, these environments enable DevOps teams to deliver greater capabilities on a wider scale. But the benefits come with new challenges: keeping track of performance, response time, and efficiency can be cumbersome, especially when a multicloud strategy spans multiple cloud environments and on-premises systems.

To learn how to solve these challenges, we sat down with Dynatrace technical project managers Michal Franczak and Michal Nalezinski at Perform 2021. They explained how AI-powered, full-stack observability into multicloud environments enables DevOps teams to tame multicloud complexity so they can deliver better software faster.

Apps need to work on the same data sets

Cloud computing environments like AWS, Azure, and GCP offer a wide array of computing capabilities and capacity. Without the overhead of establishing and maintaining on-premises servers, these systems save resources. But not all cloud environments are ideal for all use cases.

Michal Nalezinski, whose role at Dynatrace focuses on monitoring through public APIs, cited Flexera’s State of the Cloud study, which found that 93% of companies have a multicloud strategy so they can leverage the best qualities of each cloud provider for different situations. “The most common scenario is to have different apps deployed on different clouds,” Nalezinski said. “This approach enables teams to take advantage of the features that best match their use cases.”

Organizations also use multicloud environments to move workloads between public and private clouds and on-premises systems with little preparation. The benefit is scalability. The Flexera study noted that on average, organizations use at least two public and private cloud computing solutions and are experimenting with at least one more. The challenge? Multicloud observability.

“As soon as you have multiclouds, you need data integration,” Nalezinski said. “Apps need to work on the same data sets. Teams need data to work seamlessly across multiple cloud environments.”

Nalezinski argued that teams should be able to focus on delivering the best features in the most appropriate environment, not worrying about integration. “Customers should have the flexibility to choose any cloud vendor they want, and however many they want, even on-premises. They need a platform-agnostic way to monitor and manage performance across all of them seamlessly. Hybrid environments shouldn’t require hybrid monitoring tools.”

A single source of truth—one platform to tame multicloud complexity

To illustrate his point, Nalezinski gave the example of the GCP suite of services. “Dynatrace OneAgent already supported the GCP basics (GCE, GAE, and GKE), but now supports all the other GCP monitoring APIs. Dynatrace captures all the relevant data and analyzes it with AI. This instantly identifies the root cause of any problem with the application or its infrastructure, anywhere it’s hosted or transacting.” He went on to explain why this is unique.

“We believe ingesting data into Dynatrace is only the first step in the cloud monitoring journey,” he said. “That’s why we provide Cloud Monitoring, a package of capabilities for the cloud services we’re supporting. It includes metrics, dashboards, alerts, events, logs, and cross-environment traces. It’s all aimed at quick-starting multicloud monitoring with zero or minimal user input.”

Nalezinski demonstrated how easy it is to add a new service to Dynatrace. In two clicks, he added Azure App Service Plan. Dynatrace automatically selected the six most important metrics: CPU and memory percentage, disk and HTTP queue length, and data in/out. “You really don’t have to think about it,” Nalezinski said, though each service is also easy to customize. Data gathered from multicloud environments is also available to visualize in dashboards.
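The demo doesn’t show how that data is consumed afterward, but as a rough sketch, the same auto-selected metrics can be pulled back out programmatically through the Dynatrace Metrics API v2. The environment URL, token, and metric key below are placeholders and assumptions, not values from the demo.

```python
# Minimal sketch: querying an App Service plan metric from Dynatrace
# via the Metrics API v2. Replace the environment URL and token; the
# metric key is illustrative, so check the exact key in your metrics browser.
import os
import requests

ENV_URL = "https://YOUR_ENVIRONMENT.live.dynatrace.com"   # placeholder
API_TOKEN = os.environ["DT_API_TOKEN"]                    # token with metrics.read scope
METRIC_SELECTOR = "builtin:cloud.azure.web.serverFarms.cpuPercentage"  # assumed key

resp = requests.get(
    f"{ENV_URL}/api/v2/metrics/query",
    headers={"Authorization": f"Api-Token {API_TOKEN}"},
    params={
        "metricSelector": METRIC_SELECTOR,
        "from": "now-2h",     # last two hours
        "resolution": "5m",   # one data point every five minutes
    },
    timeout=30,
)
resp.raise_for_status()

# Print each series as (dimensions, [(timestamp, value), ...])
for series in resp.json().get("result", []):
    for data in series.get("data", []):
        print(data["dimensions"], list(zip(data["timestamps"], data["values"])))
```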

Turning terabytes of logs into answers to real problems—instantly

Michal Franczak, whose role at Dynatrace focuses on using logs to enhance workload observability, pointed out that Dynatrace’s multicloud observability extends far beyond monitoring, dashboards, and alerts. “Teams need more than just performance metrics,” Franczak said.

Cloud environments can generate terabytes of logs for many services in all different formats. “Often, teams don’t have access to the underlying virtual machine, so log data can be priceless for troubleshooting,” Franczak said. “In such cases, it’s critical to filter data and show logs in the right context.”

Dynatrace Log Monitoring provides access to the raw log data and extracts the most important metadata. It connects log lines to real problems and to the related entities in the environment. “Logs can extend your multicloud observability beyond simple analysis,” Franczak said. “Davis AI automates the entire root-cause detection process.”

Franczak demonstrated how Davis discovered a response-time degradation in an e-commerce app. Davis pinpointed the problem to a configuration change event registered in CloudTrail: an RDS database had been reconfigured, which impacted 400 users. “With the event data, you can know who made the change and what it was about,” Franczak explained.
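Davis surfaces that CloudTrail event automatically; the article doesn’t show the underlying lookup. As a hedged analogue only, the same “who changed what” question can be answered directly against CloudTrail with boto3. The ModifyDBInstance event name assumes an RDS reconfiguration and is an illustration, not a detail taken from the demo.

```python
# Hedged analogue of the demo: look up a recent database reconfiguration
# event in AWS CloudTrail to see who made the change and what it contained.
from datetime import datetime, timedelta, timezone
import json

import boto3

cloudtrail = boto3.client("cloudtrail")

end = datetime.now(timezone.utc)
start = end - timedelta(hours=6)   # search the last six hours

events = cloudtrail.lookup_events(
    # "ModifyDBInstance" is an assumption about the event involved.
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ModifyDBInstance"}],
    StartTime=start,
    EndTime=end,
    MaxResults=10,
)

for event in events["Events"]:
    detail = json.loads(event["CloudTrailEvent"])          # full event record as JSON
    print(event["EventTime"], event.get("Username"), detail.get("requestParameters"))
```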

Another scenario involves BigQuery, a GCP analytics service. To help DevOps teams easily find a particular activity, Franczak demonstrated how to use the log viewer and the advanced query feature to set filters that quickly isolate specific events.
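The Dynatrace log viewer’s query syntax isn’t reproduced in the article. As a swapped-in comparison rather than the feature Franczak demonstrated, the same kind of filtering can be done on the GCP side with the google-cloud-logging client library, narrowing Cloud Audit Logs down to BigQuery activity. The method name and timestamp below are illustrative.

```python
# Swapped-in sketch (not the Dynatrace log viewer): isolate BigQuery activity
# in Cloud Audit Logs with a filter. Requires application default credentials.
from google.cloud import logging

client = logging.Client()

# Restrict to BigQuery audit entries for completed jobs after a given time.
log_filter = (
    'protoPayload.serviceName="bigquery.googleapis.com" '
    'AND protoPayload.methodName="jobservice.jobcompleted" '
    'AND timestamp>="2021-02-01T00:00:00Z"'
)

for entry in client.list_entries(filter_=log_filter, page_size=20):
    print(entry.timestamp, entry.payload)
```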

Paving the future for real-time cloud monitoring

Franczak and Nalezinski concluded with a look at how Dynatrace is making multicloud monitoring even better for the future. “To fulfill the market needs for multicloud observability, Dynatrace is partnering with the major cloud providers to enable data streaming for a real-time cloud monitoring experience,” Nalezinski said.

“Dynatrace is also an active member of the OpenTelemetry community,” Franczak added. “OpenTelemetry is a collection of tools you can use to instrument applications so they export metrics, logs, and traces for analysis.”
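To make that description concrete, here is a minimal, hedged sketch of OpenTelemetry instrumentation in Python: it creates a tracer, emits one span, and exports it. The service and span names are illustrative, and a ConsoleSpanExporter stands in for a real backend such as an OTLP endpoint.

```python
# Minimal OpenTelemetry sketch: configure a tracer provider, emit one span,
# and export it to the console instead of a real observability backend.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")   # illustrative service name

with tracer.start_as_current_span("process-order") as span:
    span.set_attribute("order.items", 3)        # example attribute
    # ... application logic would run here ...
```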

Together, these initiatives help Dynatrace and Davis AI perform automated and intelligent root-cause analysis in multicloud and hybrid environments.