So many bad takes – What is there to learn from the Prime Video microservices to monolith story

Adrian Cockcroft

I don’t advocate “Serverless Only”, and I recommended that if you need sustained high traffic, low latency, and higher efficiency, you should re-implement your rapid prototype as a continuously running autoscaled container, as part of a larger serverless event-driven architecture, which is what they did.
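
The recommendation reduces to a concrete pattern: keep the event-driven contract, but replace per-invocation functions with a long-lived consumer that an autoscaler sizes to the queue. A minimal Python sketch of such a worker, assuming an SQS queue; the queue URL and the process_frame handler are hypothetical:

```python
# Long-running container worker: same event-driven contract as a Lambda,
# but it amortizes startup cost across many messages. An autoscaler
# (e.g. ECS scaling on queue depth) adds or removes replicas of this process.
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/frames"  # hypothetical

def process_frame(body: str) -> None:
    ...  # the work a per-invocation function handler would have done

def main() -> None:
    while True:
        # Long polling keeps the container busy without hammering the API.
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20
        )
        for msg in resp.get("Messages", []):
            process_frame(msg["Body"])
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])

if __name__ == "__main__":
    main()
```

Scaling on queue depth rather than per-request invocation is what recovers efficiency at sustained load while preserving the surrounding event-driven architecture.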

DevOps automation: From event-driven automation to answer-driven automation [with causal AI]

Dynatrace

In the world of DevOps and SRE, DevOps automation answers the undeniable need for efficiency and scalability. The next evolution, referred to as answer-driven automation, empowers teams to address complex issues in real time, optimize workflows, and enhance overall operational efficiency.
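
The shift the excerpt names boils down to a control-flow change: instead of mapping each alert straight to a fixed runbook, the handler first asks an analysis engine for a root-cause answer and acts on that. A minimal Python sketch, where diagnose() and the runbook actions are hypothetical stand-ins, not Dynatrace's actual API:

```python
# Event-driven: event -> fixed runbook. Answer-driven: event -> causal
# analysis -> remediation chosen from the answer. Everything here is a
# hypothetical stand-in for the causal-AI backend and runbook actions.
from dataclasses import dataclass

@dataclass
class Answer:
    root_cause: str    # e.g. "db-connection-pool-exhausted"
    confidence: float  # 0.0 - 1.0

def diagnose(event: dict) -> Answer:
    """Stub for a causal-AI engine; wire this to the real service."""
    return Answer("unknown", 0.0)

def restart_pool() -> None: ...
def rollback_release() -> None: ...
def page_on_call(event: dict, answer: Answer) -> None: ...

REMEDIATIONS = {
    "db-connection-pool-exhausted": restart_pool,
    "bad-deployment": rollback_release,
}

def handle(event: dict) -> None:
    answer = diagnose(event)
    action = REMEDIATIONS.get(answer.root_cause)
    if action and answer.confidence >= 0.9:
        action()                     # automate only high-confidence answers
    else:
        page_on_call(event, answer)  # otherwise escalate to a human
```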

Trending Sources

Rebuilding Netflix Video Processing Pipeline with Microservices

The Netflix TechBlog

The Netflix video processing pipeline went live with the launch of our streaming service in 2007. This architecture shift greatly reduced processing latency and increased system resiliency. The service also provides options for fine-tuning latency and throughput, such as dividing the input video into small chunks that can be processed in parallel.
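
The chunking mentioned at the end is the knob behind that latency/throughput trade-off: a title is split into fixed-length segments, encoded in parallel, and stitched back in order. A minimal fan-out/fan-in sketch in Python; the 30-second chunk length and the encode_chunk stub are illustrative, not Netflix's service API:

```python
# Fan-out/fan-in: split the source into chunks, encode them in parallel,
# reassemble in order. Smaller chunks lower latency (more parallelism);
# larger chunks cut per-chunk overhead. Values here are illustrative.
from concurrent.futures import ProcessPoolExecutor

CHUNK_SECONDS = 30  # tuning knob: latency vs. per-chunk overhead

def split(duration_s: int) -> list[tuple[int, int]]:
    """Return (start, end) second ranges covering the whole title."""
    return [(s, min(s + CHUNK_SECONDS, duration_s))
            for s in range(0, duration_s, CHUNK_SECONDS)]

def encode_chunk(span: tuple[int, int]) -> bytes:
    """Stand-in for the real per-chunk encode step."""
    return b""  # stub

def encode_title(duration_s: int) -> bytes:
    spans = split(duration_s)
    with ProcessPoolExecutor() as pool:
        # map() preserves input order, so chunks concatenate correctly.
        return b"".join(pool.map(encode_chunk, spans))

if __name__ == "__main__":
    encode_title(300)  # a 5-minute title -> ten 30-second chunks
```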

Progress Delayed Is Progress Denied

Alex Russell

Efficiently enables new styles of drawing content on the web, removing many hard tradeoffs between visual richness, accessibility, and performance. Since 2007, support for these features has barely improved. For heavily latency-sensitive use-cases like WebXR, this is a critical component in delivering a good experience.

The Netflix Cosmos Platform

The Netflix TechBlog

It supports both high-throughput services that consume hundreds of thousands of CPUs at a time and latency-sensitive workloads where humans are waiting for the results of a computation. The first generation of this system went live with the streaming launch in 2007. Warm capacity: end-users can request compute resources (e.g., …).
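
The excerpt hints at the core scheduling tension in Cosmos: batch services that soak up huge CPU counts must not starve jobs a human is waiting on, so a slice of capacity is held warm for the latency-sensitive tier. A toy admission gate in Python illustrating that reservation idea; the numbers and names are illustrative, not Cosmos internals:

```python
# Toy capacity gate: throughput (batch) work may only use cluster capacity
# above a reserved "warm" slice, which is held back for latency-sensitive
# jobs so interactive work never queues behind large batch runs.
TOTAL_CPUS = 100_000
WARM_RESERVED = 10_000  # illustrative reservation for interactive work

used_batch = 0
used_interactive = 0

def admit(cpus: int, latency_sensitive: bool) -> bool:
    """Admit a job if capacity allows; batch jobs cannot touch the warm slice."""
    global used_batch, used_interactive
    if latency_sensitive:
        ok = used_interactive + used_batch + cpus <= TOTAL_CPUS
        if ok:
            used_interactive += cpus
        return ok
    ok = used_batch + used_interactive + cpus <= TOTAL_CPUS - WARM_RESERVED
    if ok:
        used_batch += cpus
    return ok
```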