
Implementing AWS well-architected pillars with automated workflows

Dynatrace

If you use AWS cloud services to build and run your applications, you may be familiar with the AWS Well-Architected Framework: a set of best practices and guidelines that help you design and operate reliable, secure, efficient, cost-effective, and sustainable systems in the cloud.


AWS serverless services: Exploring your options

Dynatrace

Amazon Web Services (AWS) offers a wide range of serverless solutions. To get a better understanding of AWS serverless, we’ll first explore the basics of serverless architectures, review AWS serverless offerings, and examine common use cases.
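As a rough illustration of the serverless model the article surveys (not code from the article itself), a minimal AWS Lambda handler in Python might look like the sketch below; the "name" query parameter and the API Gateway proxy event shape are assumptions chosen for the example.

```python
import json

def lambda_handler(event, context):
    """Minimal AWS Lambda handler for an API Gateway proxy event.

    Reads an assumed 'name' query parameter and returns a JSON greeting.
    """
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```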


Trending Sources


Supporting Diverse ML Systems at Netflix

The Netflix TechBlog

Compute: Titus. Whereas open-source users of Metaflow rely on AWS Batch or Kubernetes as the compute backend, we rely on our centralized compute platform, Titus. We have talked about the importance of a production-grade workflow orchestrator in the context of Metaflow when we released support for AWS Step Functions years ago.
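For context on the open-source side of this comparison, here is a minimal sketch of a Metaflow flow whose compute-heavy step requests remote resources via the @batch decorator (the AWS Batch path mentioned above); the flow name, step names, and resource numbers are illustrative assumptions, and Netflix's internal Titus backend is not shown.

```python
from metaflow import FlowSpec, step, batch

class HelloFlow(FlowSpec):
    """Toy flow: open-source users can push the heavy step to AWS Batch,
    while Netflix routes the equivalent step to its Titus platform."""

    @step
    def start(self):
        self.message = "hello"
        self.next(self.compute)

    @batch(cpu=2, memory=4096)  # resource request for the remote compute backend
    @step
    def compute(self):
        self.message = self.message.upper()
        self.next(self.end)

    @step
    def end(self):
        print(self.message)

if __name__ == "__main__":
    HelloFlow()
```

With the open-source CLI, `python helloflow.py run` executes the flow, and the same flow can be scheduled on a production orchestrator such as AWS Step Functions.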


Netflix Cloud Packaging in the Terabyte Era

The Netflix TechBlog

Figure 1: A Simplified Video Processing Pipeline. With this architecture, chunk encoding is very efficient and is processed on distributed cloud computing instances. Since not all projects are terabyte-scale, allocating the largest cloud storage to all packager instances is not an efficient use of cloud resources.
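To make the chunk-encoding idea concrete, here is a hedged sketch of splitting a title into fixed-length chunks and encoding them in parallel; the chunk duration, function names, and source URI are assumptions for illustration, not Netflix's actual pipeline.

```python
from concurrent.futures import ThreadPoolExecutor

CHUNK_SECONDS = 30  # assumed chunk duration, for illustration only

def encode_chunk(source_uri: str, start: int, duration: int) -> str:
    """Placeholder for the per-chunk encode job a cloud instance would run."""
    # A real pipeline would invoke an encoder against this time range of the source.
    return f"{source_uri}#t={start},{start + duration} -> encoded"

def encode_in_parallel(source_uri: str, total_seconds: int, workers: int = 8) -> list[str]:
    """Split a title into chunks and encode them concurrently, mirroring the
    idea of distributing chunk encoding across cloud computing instances."""
    starts = range(0, total_seconds, CHUNK_SECONDS)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(encode_chunk, source_uri, s, CHUNK_SECONDS) for s in starts]
        return [f.result() for f in futures]

if __name__ == "__main__":
    for result in encode_in_parallel("s3://bucket/title.mov", total_seconds=120):
        print(result)
```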


What is a data lakehouse? Combining data lakes and warehouses for the best of both worlds

Dynatrace

While data lakes and data warehouses are commonly used architectures for storing and analyzing data, a data lakehouse is an efficient third approach that unifies the two while preserving the benefits of both. A data lakehouse therefore enables organizations to get the best of both worlds.


Elasticsearch Indexing Strategy in Asset Management Platform (AMP)

The Netflix TechBlog

Elasticsearch recommends keeping each shard under 65 GB (AWS recommends under 50 GB), so we could create time-based indices where each index holds somewhere between 16 and 20 GB of data, leaving some buffer for data growth. For every asset indexing request, we look at the cache to determine the corresponding time-bucket index for the asset.
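As a hedged sketch of the time-bucket lookup described above (not the article's code), the snippet below maps an asset's creation time to a time-based index name and caches the result; the index-name prefix and monthly bucket size are assumptions chosen for the example.

```python
from datetime import datetime, timezone
from functools import lru_cache

# Assumed prefix and monthly buckets, for illustration; in practice bucket
# boundaries would be chosen so each index stays roughly 16-20 GB.
INDEX_PREFIX = "amp_assets"

@lru_cache(maxsize=1024)
def time_bucket_index(year: int, month: int) -> str:
    """Resolve (and cache) the time-based index name for a given bucket."""
    return f"{INDEX_PREFIX}_{year:04d}_{month:02d}"

def index_for_asset(created_at: datetime) -> str:
    """Pick the time-bucket index an asset indexing request should go to."""
    ts = created_at.astimezone(timezone.utc)
    return time_bucket_index(ts.year, ts.month)

if __name__ == "__main__":
    print(index_for_asset(datetime(2022, 7, 14, tzinfo=timezone.utc)))  # amp_assets_2022_07
```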


Remote Workstations for the Discerning Artists

The Netflix TechBlog

Below is a broad technical overview of how to go from an AWS instance to a Netflix Workstation. Instead, we created a service that takes the most popular configurations and caches them, so we can gather and analyze usage data to create efficiencies and automation. Now that you know why, here is how we did it; a small sketch of the caching idea follows.
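The sketch below illustrates the general idea of caching pre-provisioned instances for popular configurations while recording usage; the class and field names are hypothetical and are not taken from the article.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class WorkstationConfig:
    """Hypothetical description of a workstation shape (fields are assumptions)."""
    instance_type: str
    image: str

class ConfigCache:
    """Keep pre-provisioned instances for the most requested configurations
    and record usage so popular shapes can be pre-warmed."""

    def __init__(self) -> None:
        self._pool: dict[WorkstationConfig, list[str]] = {}
        self.usage: Counter = Counter()

    def add(self, config: WorkstationConfig, instance_id: str) -> None:
        """Register a pre-built instance for a given configuration."""
        self._pool.setdefault(config, []).append(instance_id)

    def acquire(self, config: WorkstationConfig) -> str | None:
        """Hand out a cached instance if one exists; always record the request."""
        self.usage[config] += 1  # usage data drives which configs to pre-warm
        instances = self._pool.get(config) or []
        return instances.pop() if instances else None
```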