
Migrating Critical Traffic At Scale with No Downtime — Part 1

The Netflix TechBlog

Replay Traffic Testing: Replay traffic refers to production traffic that is cloned and forked over to a different path in the service call graph, allowing us to exercise new/updated systems in a manner that simulates actual production conditions. It helps expose memory leaks, deadlocks, caching issues, and other system issues.
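
As a rough illustration of the cloning-and-forking described above (not Netflix's actual implementation), a request handler can serve the user from the existing path while asynchronously replaying a copy of the request against the new system; the endpoints and helper below are hypothetical:

```python
# Minimal sketch of the replay-traffic idea: the production path still serves
# the user, and a clone of the request is forked to the candidate system so it
# sees realistic traffic. Hostnames, paths, and logging are placeholders.
import threading
import requests

PRODUCTION = "https://api.example.com"    # existing path (assumed)
CANDIDATE = "https://canary.example.com"  # new/updated system (assumed)

def handle(path: str, params: dict) -> requests.Response:
    # Serve the user from the production path as usual.
    response = requests.get(f"{PRODUCTION}{path}", params=params, timeout=2)

    # Fork a clone of the same request to the candidate path; its response is
    # only recorded for offline comparison and never returned to the user.
    threading.Thread(target=replay, args=(path, params), daemon=True).start()
    return response

def replay(path: str, params: dict) -> None:
    try:
        shadow = requests.get(f"{CANDIDATE}{path}", params=params, timeout=2)
        log_for_comparison(path, shadow.status_code, shadow.elapsed)
    except requests.RequestException as exc:
        log_for_comparison(path, "error", exc)

def log_for_comparison(path, status, detail):
    print(f"replay {path}: {status} ({detail})")
```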

Seamlessly Swapping the API backend of the Netflix Android app

The Netflix TechBlog

This allows the app to query a list of “paths” in each HTTP request, and get specially formatted JSON (jsonGraph) that we use to cache the data and hydrate the UI. Functional Testing: This was the most straightforward of them all; a set of tests alongside each path exercised it against the old and new endpoints.
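
As a loose illustration of the path/jsonGraph idea (the paths, fields, and values below are invented, not the actual Netflix API schema):

```python
# Illustrative only: a Falcor-style "paths" query and the shape of a jsonGraph
# response. Everything here is a made-up example of the convention.

# The client lists every path it needs in a single request...
paths = [
    ["videos", 123, ["title", "maturityRating"]],
    ["videos", 123, "similar", {"from": 0, "to": 2}, "title"],
]

# ...and the server answers with a jsonGraph fragment keyed by those same
# paths, which the app merges into its cache and uses to hydrate the UI.
json_graph_response = {
    "jsonGraph": {
        "videos": {
            "123": {
                "title": "Example Title",
                "maturityRating": "PG",
                "similar": {"0": {"$type": "ref", "value": ["videos", 456]}},
            },
            "456": {"title": "Another Title"},
        }
    }
}

# A cache lookup is then just walking the requested path through the graph:
node = json_graph_response["jsonGraph"]
for key in ["videos", "123", "title"]:
    node = node[str(key)]
print(node)  # "Example Title"
```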

Trending Sources

Bring Your Own Cloud (BYOC) vs. Dedicated Hosting at ScaleGrid

ScaleGrid

Deploying your application and database in the same VPC also provides the lowest possible latency path. This becomes really important for cache solutions like Redis™. At ScaleGrid we recommend you deploy your clusters on private VPC subnets so that your database is not routable from the internet.
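
One hedged way to illustrate the "not routable from the internet" point, assuming AWS, boto3, and valid credentials (this is not ScaleGrid tooling, and the subnet ID is a placeholder): a database subnet is private when none of its route tables point at an internet gateway.

```python
# Sketch: check whether a subnet has a route to an internet gateway. Only
# route tables explicitly associated with the subnet are inspected; a subnet
# falling back to the VPC's main route table would need an extra check.
import boto3

ec2 = boto3.client("ec2")

def subnet_is_private(subnet_id: str) -> bool:
    """Return True if no associated route table routes to an internet gateway."""
    tables = ec2.describe_route_tables(
        Filters=[{"Name": "association.subnet-id", "Values": [subnet_id]}]
    )["RouteTables"]
    for table in tables:
        for route in table.get("Routes", []):
            if route.get("GatewayId", "").startswith("igw-"):
                return False
    return True

print(subnet_is_private("subnet-0123456789abcdef0"))  # hypothetical subnet ID
```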

Fixing a slow site iteratively

CSS-Tricks

With all of this in mind, I thought improving the speed of my own version of a slow site would be a fun exercise. Redirects are often pretty light in terms of the latency that they add to a website, but they are an easy first thing to check, and they can generally be removed with little effort. Improvement #2: The Critical Render Path.
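
A quick way to make that "easy first thing to check" concrete (the article itself does not use this snippet; the URL is a placeholder) is to list the redirect hops a page goes through before the final response:

```python
# Print each redirect hop and the final destination for a URL.
import requests

response = requests.get(
    "https://example.com/old-page", allow_redirects=True, timeout=5
)

for hop in response.history:
    print(f"{hop.status_code} {hop.url} -> {hop.headers.get('Location')}")
print(f"final: {response.status_code} {response.url}")
```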

Evaluating the Evaluation: A Benchmarking Checklist

Brendan Gregg

sounds like a homework exercise of purely academic value. In some cases, a benchmark may appear to exceed network bandwidth because it returns from a local memory cache instead of the remote target. Once, during a proof of concept, a client reported that latency was unacceptably high for the benchmark: over one second for each request!
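
A back-of-the-envelope version of that sanity check, with made-up numbers: if the reported throughput exceeds what the network link can physically carry, the benchmark is probably being served from a local memory cache rather than the remote target.

```python
# Compare measured benchmark throughput against the link's line rate.
# Numbers are illustrative, not taken from the post.
LINK_GBPS = 10                 # nominal network bandwidth
measured_MB_per_s = 1_800      # throughput the benchmark reported

link_MB_per_s = LINK_GBPS * 1000 / 8   # 10 Gbit/s ≈ 1250 MB/s

if measured_MB_per_s > link_MB_per_s:
    print("Measured throughput exceeds line rate: suspect a local cache hit.")
else:
    print("Within line rate; the remote target is plausibly being exercised.")
```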

Taiji: managing global user traffic for large-scale Internet services at the edge

The Morning Paper

Sharing is caring caching. Taiji’s routing table is a materialized representation of how user traffic at various edge nodes ought to be distributed over available data centers, so as to meet objectives such as balancing utilisation across all data centers or optimising for network latency.
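
A toy sketch of what such a routing table might look like (not Taiji's actual representation; node and data center names are invented): each edge node maps to the fraction of its traffic destined for each data center, and a deterministic hash assigns a given user to one of them in those proportions.

```python
# Toy routing table: edge node -> {data center: fraction of traffic}.
import hashlib

routing_table = {
    "edge-lhr": {"dc-eu-west": 0.7, "dc-us-east": 0.3},
    "edge-sjc": {"dc-us-west": 0.8, "dc-us-east": 0.2},
}

def pick_datacenter(edge_node: str, user_id: str) -> str:
    """Map a user to a data center according to the edge node's traffic split."""
    digest = hashlib.sha256(user_id.encode()).digest()
    point = digest[0] / 255  # deterministic value in [0, 1]
    cumulative = 0.0
    for datacenter, fraction in routing_table[edge_node].items():
        cumulative += fraction
        if point <= cumulative:
            return datacenter
    return datacenter  # fall through on rounding error

print(pick_datacenter("edge-lhr", "user-42"))
```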
