
Migrating Critical Traffic At Scale with No Downtime - Part 1

The Netflix TechBlog

This blog series examines the tools, techniques, and strategies we have utilized to migrate critical traffic at scale with no downtime. One such strategy is replay testing: we execute a copy (replay) of production traffic against a system’s existing and new versions to perform relevant validations. This approach has a handful of benefits.
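To make the replay idea concrete, here is a minimal sketch (not Netflix’s actual tooling) of a handler that serves the user from the existing version while replaying the same request against the new version and flagging mismatched responses. The service URLs and the plain-text diff are assumptions, and it presumes a runtime with a global fetch (Node 18+ or a browser/worker).

```typescript
// Sketch of replay (shadow) testing: serve production traffic from the
// existing version, replay a copy against the candidate out of band, and
// record any divergence. URLs and the diff logic are illustrative only.
const EXISTING_URL = "https://service-v1.internal";  // hypothetical
const CANDIDATE_URL = "https://service-v2.internal"; // hypothetical

async function handle(path: string, body: string): Promise<string> {
  // The user is always served by the existing, trusted version.
  const existing = await fetch(`${EXISTING_URL}${path}`, { method: "POST", body });
  const existingText = await existing.text();

  // Replay the same request against the new version without blocking the caller.
  fetch(`${CANDIDATE_URL}${path}`, { method: "POST", body })
    .then(async (candidate) => {
      const candidateText = await candidate.text();
      if (candidateText !== existingText) {
        console.warn(`response mismatch on ${path}`); // feed into validation reports
      }
    })
    .catch((err) => console.warn(`candidate call failed on ${path}: ${err}`));

  return existingText;
}
```

In practice the comparison would also normalize non-deterministic fields (timestamps, request IDs) before diffing.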


Seamlessly Swapping the API backend of the Netflix Android app

The Netflix TechBlog

Over the course of this post, we will talk about our approach to this migration, the strategies we employed, and the tools we built to support it. The existing API is built on Falcor: the app queries a list of “paths” in each HTTP request and gets back specially formatted JSON (jsonGraph) that we use to cache the data and hydrate the UI.
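As a rough illustration of that path/jsonGraph pattern, the sketch below batches the paths a screen needs into one request and merges the jsonGraph response into a local cache used to hydrate the UI. The endpoint name, path syntax, and cache shape are assumptions, not the actual Netflix/Falcor client API.

```typescript
// Illustrative only: one HTTP request carries every "path" the screen needs,
// and the jsonGraph response is merged into a local cache.
type JsonGraph = { [key: string]: JsonGraph | string | number | boolean };

const cache: JsonGraph = {};

// Deep-merge a jsonGraph fragment into the local cache.
function mergeGraph(target: JsonGraph, fragment: JsonGraph): void {
  for (const [key, value] of Object.entries(fragment)) {
    if (typeof value === "object" && typeof target[key] === "object") {
      mergeGraph(target[key] as JsonGraph, value as JsonGraph);
    } else {
      target[key] = value;
    }
  }
}

async function fetchPaths(paths: string[]): Promise<JsonGraph> {
  const url = `/api/graph?paths=${encodeURIComponent(JSON.stringify(paths))}`; // hypothetical endpoint
  const res = await fetch(url);
  const body = (await res.json()) as { jsonGraph: JsonGraph };
  mergeGraph(cache, body.jsonGraph);
  return cache;
}

// Example: hydrate a row of the home screen.
// fetchPaths(["videos[0..9].title", "videos[0..9].boxart"]);
```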


Bring Your Own Cloud (BYOC) vs. Dedicated Hosting at ScaleGrid

Scalegrid

In this post, we compare ScaleGrid’s Bring Your Own Cloud (BYOC) plan vs. the standard Dedicated Hosting model to help you determine the best strategy for your MySQL, PostgreSQL, Redis™ and MongoDB® database deployment. Deploying your application and database in the same VPC also provides the lowest possible latency path.
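One way to see the latency difference that same-VPC claim refers to is to time a TCP connect to the database’s private endpoint versus its public one. The hostnames and port below are placeholders, not real ScaleGrid endpoints.

```typescript
// Rough comparison of connect latency over the private (same-VPC) path vs.
// the public internet path. Endpoints are hypothetical.
import { Socket } from "node:net";

function timeConnect(host: string, port: number): Promise<number> {
  return new Promise((resolve, reject) => {
    const start = process.hrtime.bigint();
    const socket = new Socket();
    socket.once("connect", () => {
      socket.end();
      resolve(Number(process.hrtime.bigint() - start) / 1e6); // milliseconds
    });
    socket.once("error", reject);
    socket.connect(port, host);
  });
}

async function main() {
  const privateMs = await timeConnect("db.internal.example", 5432); // same-VPC path
  const publicMs = await timeConnect("db.public.example", 5432);    // over the internet
  console.log(`private: ${privateMs.toFixed(1)} ms, public: ${publicMs.toFixed(1)} ms`);
}

main().catch(console.error);
```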


Fixing a slow site iteratively

CSS - Tricks

With all of this in mind, I thought improving the speed of my own version of a slow site would be a fun exercise. In that spirit, what we’re looking at in this article is focused more on the incremental wins and less on providing an exhaustive list or checklist of performance strategies. Compressing, minifying and caching assets.
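As a deliberately simplified example of that last point, here is a tiny static file server that gzips responses when the client accepts it and gives fingerprinted assets a long-lived cache header. The paths and cache policy are assumptions, and minification would happen at build time rather than in the server.

```typescript
// Minimal static server illustrating "compressing ... and caching assets":
// gzip the body when the client accepts it, and mark fingerprinted files as
// long-lived. A real server would also set Content-Type and guard against
// path traversal; minification belongs in the build step.
import { createServer } from "node:http";
import { createReadStream, existsSync } from "node:fs";
import { createGzip } from "node:zlib";
import { join } from "node:path";

const ROOT = "./dist"; // hypothetical build output directory

createServer((req, res) => {
  const filePath = join(ROOT, req.url === "/" ? "index.html" : req.url ?? "/");
  if (!existsSync(filePath)) {
    res.writeHead(404).end("not found");
    return;
  }

  // Fingerprinted assets (e.g. app.3f9a1c.js) are safe to cache "forever";
  // HTML should be revalidated on every visit.
  const longLived = /\.[0-9a-f]{6,}\./.test(filePath);
  res.setHeader("Cache-Control", longLived ? "public, max-age=31536000, immutable" : "no-cache");

  const file = createReadStream(filePath);
  if (String(req.headers["accept-encoding"] ?? "").includes("gzip")) {
    res.setHeader("Content-Encoding", "gzip");
    file.pipe(createGzip()).pipe(res);
  } else {
    file.pipe(res);
  }
}).listen(8080);
```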


Trade-offs under pressure: heuristics and observations of teams resolving internet service outages (Part II)

The Morning Paper

At 1:18pm a key observation was made that an API call to populate the homepage sidebar saw a huge jump in latency. The process tracing exercise included examining IRC transcripts from multiple channels. Members of the team began diagnosing the issue using the #sysops and #warroom internal IRC channels.


Taiji: managing global user traffic for large-scale Internet services at the edge

The Morning Paper

Sharing is caring caching. Taiji’s routing table is a materialized representation of how user traffic at various edge nodes ought to be distributed over available data centers to balance data center utilization and minimize latency. A routing policy might, for example, balance utilisation across all data centers or optimise for network latency.
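A toy version of that routing table, purely to illustrate the shape of the data, might look like the sketch below. The edge nodes, data centers, and weights are made up; Taiji’s real table is produced by its optimization pipeline, not hand-written.

```typescript
// Each edge node maps to the fraction of its traffic that should go to each
// data center; the edge picks a destination by weighted choice.
type RoutingTable = Record<string, Record<string, number>>; // edge -> dc -> fraction

const table: RoutingTable = {
  "edge-lhr": { "dc-europe": 0.7, "dc-useast": 0.3 },
  "edge-sjc": { "dc-uswest": 0.9, "dc-useast": 0.1 },
};

// Weighted random pick of a data center for one request arriving at an edge.
function routeRequest(edge: string, rand: () => number = Math.random): string {
  const weights = table[edge];
  let r = rand();
  for (const [dc, fraction] of Object.entries(weights)) {
    r -= fraction;
    if (r <= 0) return dc;
  }
  return Object.keys(weights)[0]; // fallback if fractions don't quite sum to 1
}

console.log(routeRequest("edge-lhr"));
```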


A Decade of Dynamo: Powering the next wave of high-performance, internet-scale applications

All Things Distributed

Performant – DynamoDB consistently delivers single-digit millisecond latencies even as your traffic volume increases. DynamoDB automatically re-distributes your data to healthy servers to ensure there are always multiple replicas of your data without you needing to intervene. Auto Scaling is on by default for all new tables and indexes.
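For a client-side view of those latencies, a minimal read with the AWS SDK for JavaScript v3 could look like the sketch below. The table name and key schema are assumptions, and the measured time includes the network round trip, so it will be higher than DynamoDB’s server-side latency.

```typescript
// Time a single GetItem call. Table ("Users") and partition key ("userId")
// are hypothetical; the timing includes network round trip.
import { DynamoDBClient, GetItemCommand } from "@aws-sdk/client-dynamodb";

const client = new DynamoDBClient({ region: "us-east-1" });

async function getUser(userId: string) {
  const start = process.hrtime.bigint();
  const result = await client.send(
    new GetItemCommand({
      TableName: "Users",             // hypothetical table
      Key: { userId: { S: userId } }, // hypothetical partition key
    })
  );
  const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`GetItem took ${elapsedMs.toFixed(1)} ms`);
  return result.Item;
}

getUser("user-123").catch(console.error);
```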
