
Seamlessly Swapping the API backend of the Netflix Android app

The Netflix TechBlog

The big difference from the monolith, though, is that this is now a standalone service deployed as a separate “application” (service) in our cloud infrastructure. Functional testing was the most straightforward of them all: a set of tests alongside each path exercised it against both the old and new endpoints.
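The post does not include its test code, but the comparison approach it describes, exercising the same path against the old and the new endpoint and checking that the answers match, can be sketched roughly as follows; the base URLs and the example path are hypothetical.

```python
import requests  # any HTTP client would do; requests is assumed here

# Hypothetical endpoints standing in for the monolith and the new standalone service.
OLD_BASE = "https://old-api.example.com"
NEW_BASE = "https://new-api.example.com"

def exercise_path(path, params=None):
    """Call the same path on both backends and compare the responses."""
    old = requests.get(f"{OLD_BASE}{path}", params=params, timeout=5)
    new = requests.get(f"{NEW_BASE}{path}", params=params, timeout=5)
    assert old.status_code == new.status_code, (old.status_code, new.status_code)
    assert old.json() == new.json(), f"response mismatch for {path}"

# One functional test per path, run against old and new side by side.
def test_video_details_path():
    exercise_path("/video/details", params={"id": "12345"})
```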


Bring Your Own Cloud (BYOC) vs. Dedicated Hosting at ScaleGrid

ScaleGrid

Are you comfortable setting up your own cloud infrastructure through AWS or Azure? Amazon Virtual Private Clouds (VPC) and Azure Virtual Networks (VNET) are private, isolated sections of the cloud infrastructure where you can launch resources. Do you want to deploy in an AWS VPC or Azure VNET?
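As a rough illustration of what setting up your own AWS infrastructure involves, the sketch below creates a VPC, a subnet, and a security group with boto3; the CIDR ranges, names, and port are made up for the example.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region chosen for the example

# The VPC is the private, isolated network; the subnet lives inside it.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
subnet_id = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")["Subnet"]["SubnetId"]

# Security groups act as virtual firewalls for resources launched in the VPC.
sg_id = ec2.create_security_group(
    GroupName="db-access",                      # hypothetical name
    Description="Allow MySQL from the app subnet",
    VpcId=vpc_id,
)["GroupId"]
ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpProtocol="tcp",
    FromPort=3306,
    ToPort=3306,
    CidrIp="10.0.1.0/24",
)
```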


Automating chaos experiments in production

The Morning Paper

In this type of environment, there are many potential sources of failure, stemming from the infrastructure itself. Two failure modes we focus on are a service becoming slower (an increase in response latency) or a service failing outright (returning errors). RPCs at Netflix are wrapped as Hystrix commands.
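The experiments described are built on Hystrix on the JVM; purely to illustrate the two failure modes above, a fault-injecting wrapper around an RPC-like call might look like the following Python sketch, where the wrapped function, delay, and error rate are all invented for the example.

```python
import random
import time

def inject_faults(call, added_latency_s=0.0, error_rate=0.0):
    """Wrap an RPC-like callable and inject the two failure modes:
    extra response latency, or an outright error."""
    def wrapped(*args, **kwargs):
        if added_latency_s:
            time.sleep(added_latency_s)             # failure mode 1: the call gets slower
        if random.random() < error_rate:
            raise RuntimeError("injected failure")  # failure mode 2: the call errors
        return call(*args, **kwargs)
    return wrapped

# Hypothetical downstream call, degraded for an experiment cohort.
def get_recommendations(user_id):
    return ["title-a", "title-b"]

degraded = inject_faults(get_recommendations, added_latency_s=0.5, error_rate=0.1)
print(degraded("user-123"))
```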


Transforming enterprise integration with reactive streams

O'Reilly Software

Although the ideas of reactive and streaming are nowhere near new, and keeping in mind that mere novelty doesn’t imply greatness, it is safe to say they have proven themselves and matured enough that many programming languages, platforms, and infrastructure products embrace them fully, across protocols (HTTP, TCP, FTP, MQTT, JMS), databases, and more.
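Reactive Streams itself is a JVM specification, but the core idea, a bounded buffer so that a fast producer cannot overwhelm a slow consumer, can be sketched with an asyncio queue; the stage names and sizes below are arbitrary.

```python
import asyncio

async def producer(queue: asyncio.Queue) -> None:
    for i in range(100):
        await queue.put(i)      # put() suspends when the queue is full: backpressure
    await queue.put(None)       # end-of-stream marker

async def consumer(queue: asyncio.Queue) -> None:
    while (item := await queue.get()) is not None:
        await asyncio.sleep(0.01)   # simulate a slow downstream stage
        print("processed", item)

async def main() -> None:
    queue = asyncio.Queue(maxsize=8)  # bounded buffer limits in-flight items
    await asyncio.gather(producer(queue), consumer(queue))

asyncio.run(main())
```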


Scaling Amazon ElastiCache for Redis with Online Cluster Resizing

All Things Distributed

Redis's microsecond latency has made it a de facto choice for caching. Four years ago, as part of our AWS fast data journey, we introduced Amazon ElastiCache for Redis, a fully managed, in-memory data store that operates at microsecond latency. However, the hash slots must be moved manually on the server side.
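For context on what those slots are: Redis Cluster maps every key to one of 16384 hash slots using CRC16 of the key (or of its {hash tag}), and resizing a cluster means moving slots between shards. A minimal slot calculation looks like this:

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16-CCITT (XMODEM), the variant Redis Cluster uses for key hashing."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def hash_slot(key: str) -> int:
    """Return the Redis Cluster slot (0-16383) that a key maps to."""
    # If the key contains a {hash tag}, only the tag is hashed,
    # which lets related keys land in the same slot.
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

print(hash_slot("user:1000"))  # which slot (and hence which shard) this key lives on
```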


A Decade of Dynamo: Powering the next wave of high-performance, internet-scale applications

All Things Distributed

Our straining database infrastructure on Oracle led us to evaluate whether we could develop a purpose-built database that would support our business needs for the long term. It remains available in the event of a server, a rack of servers, or an Availability Zone failure. Auto Scaling is on by default for all new tables and indexes.
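Auto Scaling for DynamoDB is driven by Application Auto Scaling; a rough boto3 sketch for the read capacity of a provisioned table looks like the following, where the table name, capacity limits, and target utilization are placeholders.

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the table's read capacity as a scalable target (placeholder table name).
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/MyTable",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=5,
    MaxCapacity=500,
)

# Track a target: keep consumed reads near 70% of provisioned capacity.
autoscaling.put_scaling_policy(
    PolicyName="MyTableReadScaling",
    ServiceNamespace="dynamodb",
    ResourceId="table/MyTable",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
)
```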


Amazon EC2 Cluster GPU Instances

All Things Distributed

For example, the most fundamental abstraction trade-off has always been latency versus throughput. Modern CPUs strongly favor low latency of operations, with clock cycles in the nanoseconds, and we have built general-purpose software architectures that can exploit these low latencies very well. Where to go from here?
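The trade-off can be seen in miniature by comparing a batched, throughput-oriented operation with issuing the same work one operation at a time; the array size below is arbitrary and NumPy stands in for the kind of throughput-oriented hardware the post discusses.

```python
import time
import numpy as np

n = 1_000_000
a = np.random.rand(n)
b = np.random.rand(n)

# Throughput-oriented: one batched, vectorized multiply over the whole array.
start = time.perf_counter()
c_vec = a * b
print("vectorized:", time.perf_counter() - start, "seconds")

# Operation-at-a-time: each multiply is individually cheap (low latency),
# but issuing them one by one from Python leaves most throughput unused.
start = time.perf_counter()
c_loop = [a[i] * b[i] for i in range(n)]
print("element by element:", time.perf_counter() - start, "seconds")
```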
