
Migrating Critical Traffic At Scale with No Downtime - Part 1

The Netflix TechBlog

Replay Traffic Testing: Replay traffic refers to production traffic that is cloned and forked over to a different path in the service call graph, allowing us to exercise new or updated systems in a manner that simulates actual production conditions. This approach has a handful of benefits.

Traffic 339
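The excerpt above describes cloning production requests and forking the copies onto a second path. A minimal sketch of that idea in Python, assuming two hypothetical endpoints; this illustrates the general pattern, not Netflix's actual implementation:

```python
# Minimal sketch of replay ("shadow") traffic forking. PRODUCTION_URL and REPLAY_URL
# are placeholders; this is an illustration of the idea, not Netflix's code.
import threading
import requests

PRODUCTION_URL = "https://prod.example.com/api"    # placeholder: existing path
REPLAY_URL = "https://candidate.example.com/api"   # placeholder: new path under test

def mirror_request(path, params):
    """Fire-and-forget copy of the request to the replay path."""
    try:
        requests.get(f"{REPLAY_URL}/{path}", params=params, timeout=2)
    except requests.RequestException:
        pass  # replay failures must never affect user-facing traffic

def handle_request(path, params):
    # Clone the request onto the replay path in the background...
    threading.Thread(target=mirror_request, args=(path, params), daemon=True).start()
    # ...and serve the response from the production path as usual.
    return requests.get(f"{PRODUCTION_URL}/{path}", params=params, timeout=2)
```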

Seamlessly Swapping the API backend of the Netflix Android app

The Netflix TechBlog

This allows the app to query a list of “paths” in each HTTP request, and get specially formatted JSON (jsonGraph) that we use to cache the data and hydrate the UI. Functional Testing: Functional testing was the most straightforward of them all: a set of tests alongside each path exercised it against the old and new endpoints.

Latency 233
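The functional-testing approach in the excerpt can be sketched as a per-path comparison test; the endpoint URLs and sample paths below are placeholders, not the real Netflix API:

```python
# Sketch of per-path functional testing: issue the same "paths" query against the
# old and new backends and compare the JSON they return. URLs and paths are placeholders.
import requests

OLD_ENDPOINT = "https://old-api.example.com/query"
NEW_ENDPOINT = "https://new-api.example.com/query"
SAMPLE_PATHS = ["videos[123].title", "videos[123].artwork", "profile.name"]

def fetch(endpoint, path):
    resp = requests.get(endpoint, params={"paths": path}, timeout=5)
    resp.raise_for_status()
    return resp.json()

def test_old_and_new_endpoints_agree():
    for path in SAMPLE_PATHS:
        old_json = fetch(OLD_ENDPOINT, path)
        new_json = fetch(NEW_ENDPOINT, path)
        # Strict equality for illustration; a real suite might normalize
        # timestamps, ordering, or other fields expected to differ.
        assert old_json == new_json, f"Mismatch for path {path!r}"
```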

Trending Sources


Bring Your Own Cloud (BYOC) vs. Dedicated Hosting at ScaleGrid

Scalegrid

Deploying your application and database on the same VPC also provides the lowest possible latency path. This becomes really important for cache solutions like Redis™. Security Groups: AWS Security Groups and Azure Network Security Groups allow you to lock down access to your servers through advanced virtual firewalls.

Cloud 242
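For the security-group point above, a sketch of locking a Redis port down to the application subnet using boto3; the security group ID, CIDR range, and region are placeholders:

```python
# Sketch: allow Redis (port 6379) only from the application subnet in the same VPC,
# via an AWS Security Group ingress rule. Group ID, CIDR, and region are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",       # security group attached to the Redis nodes
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 6379,                 # Redis
        "ToPort": 6379,
        "IpRanges": [{
            "CidrIp": "10.0.1.0/24",      # application subnet inside the same VPC
            "Description": "App tier only",
        }],
    }],
)
```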

Fixing a slow site iteratively

CSS - Tricks

With all of this in mind, I thought improving the speed of my own version of a slow site would be a fun exercise. Redirects are often pretty light in terms of the latency that they add to a website, but they are an easy first thing to check, and they can generally be removed with little effort. Improvement #2: The Critical Render Path.

Cache 92
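The redirect check mentioned in the excerpt can be done with a few lines of Python; the URL below is a placeholder to point at your own site:

```python
# Sketch: list a page's redirect chain and the latency each hop adds, using requests.
# response.history holds the intermediate redirect responses.
import requests

def redirect_report(url):
    resp = requests.get(url, allow_redirects=True, timeout=10)
    for hop in resp.history:
        print(f"{hop.status_code} {hop.url} -> {hop.headers.get('Location')} "
              f"({hop.elapsed.total_seconds() * 1000:.0f} ms)")
    print(f"final {resp.status_code} {resp.url} "
          f"({resp.elapsed.total_seconds() * 1000:.0f} ms)")

redirect_report("http://example.com/")  # placeholder; point this at your own site
```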

Scaling Amazon ElastiCache for Redis with Online Cluster Resizing

All Things Distributed

Redis's microsecond latency has made it a de facto choice for caching. Four years ago, as part of our AWS fast data journey, we introduced Amazon ElastiCache for Redis, a fully managed, in-memory data store that operates at microsecond latency. However, the slots must be moved manually on the server side.

Games 113
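A rough sketch of what moving a hash slot by hand involves, using redis-py's generic execute_command; the hosts, node IDs, and slot number below are placeholders:

```python
# Sketch of manually moving one hash slot between Redis Cluster nodes, using
# redis-py's execute_command. Hosts, node IDs, and the slot number are placeholders.
import redis

SLOT = 1234
source = redis.Redis(host="10.0.0.1", port=6379)
target = redis.Redis(host="10.0.0.2", port=6379)
SOURCE_ID = "<source node id>"   # from CLUSTER MYID on the source node
TARGET_ID = "<target node id>"   # from CLUSTER MYID on the target node

# 1. Mark the slot as importing on the target and migrating on the source.
target.execute_command("CLUSTER", "SETSLOT", SLOT, "IMPORTING", SOURCE_ID)
source.execute_command("CLUSTER", "SETSLOT", SLOT, "MIGRATING", TARGET_ID)

# 2. Move the keys that live in the slot, batch by batch.
while True:
    keys = source.execute_command("CLUSTER", "GETKEYSINSLOT", SLOT, 100)
    if not keys:
        break
    source.execute_command("MIGRATE", "10.0.0.2", 6379, "", 0, 5000, "KEYS", *keys)

# 3. Tell both nodes the slot now belongs to the target.
for node in (source, target):
    node.execute_command("CLUSTER", "SETSLOT", SLOT, "NODE", TARGET_ID)
```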

Evaluating the Evaluation: A Benchmarking Checklist

Brendan Gregg

sounds like a homework exercise of purely academic value. In some cases, a benchmark may appear to exceed network bandwidth because it returns from a local memory cache instead of the remote target. Once, during a proof of concept, a client reported that latency was unacceptably high for the benchmark: over one second for each request!
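One checklist item from the excerpt, sketched as a trivial sanity check; the figures are illustrative placeholders:

```python
# Sketch of a benchmark sanity check: reported throughput above the link's physical
# bandwidth usually means the benchmark is hitting a local memory cache, not the
# remote target. The numbers below are illustrative placeholders.
LINK_BANDWIDTH_GBPS = 10.0          # e.g. a 10 GbE NIC
reported_throughput_gbps = 14.2     # what the benchmark claims

if reported_throughput_gbps > LINK_BANDWIDTH_GBPS:
    print("Impossible result: throughput exceeds the network link. "
          "Check whether responses are being served from a local cache.")
```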
