
Rebuilding Netflix Video Processing Pipeline with Microservices

The Netflix TechBlog

This introductory blog focuses on an overview of our journey. Future blogs will provide deeper dives into each service, sharing insights and lessons learned from this process. This architecture shift greatly reduced the processing latency and increased system resiliency.


Migrating Critical Traffic At Scale with No Downtime - Part 1

The Netflix TechBlog

When undertaking system migrations, one of the main challenges is establishing confidence and seamlessly transitioning the traffic to the upgraded architecture without adversely impacting the customer experience. This blog series will examine the tools, techniques, and strategies we have utilized to achieve this goal.


Trending Sources


Scalable Annotation Service - Marken

The Netflix TechBlog

The service should serve real-time (i.e., UI) applications, so CRUD and search operations must be achieved with low latency. Because it will be used by many internal UI applications, low latency for CRUD and search is a hard requirement. Teams should also be able to define their own data model for annotations.
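A minimal sketch of what such a team-defined annotation model with CRUD and search operations could look like (a hypothetical illustration only, not Marken's actual API), in Java:

// Hypothetical sketch only, not Marken's actual API: team-defined annotation
// schemas with the low-latency CRUD and search operations described above.
import java.util.List;
import java.util.Map;
import java.util.UUID;

record Annotation(UUID id, String schemaName, Map<String, Object> fields) {}

interface AnnotationService {
    Annotation create(Annotation annotation);                                // create
    Annotation get(UUID id);                                                 // read
    Annotation update(Annotation annotation);                                // update
    void delete(UUID id);                                                    // delete
    List<Annotation> search(String schemaName, Map<String, Object> query);   // search
}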


Supercomputing Predictions: Custom CPUs, CXL3.0, and Petalith Architectures

Adrian Cockcroft

Here are some predictions I’m making: Jack Dongarra’s efforts to highlight the low efficiency achieved on the HPCG benchmark will influence the next generation of supercomputer architectures to optimize for sparse matrix computations. Next-generation architectures will use CXL3.0.


Consistent caching mechanism in Titus Gateway

The Netflix TechBlog

The original assumptions and architectural choices were no longer viable. This blog post presents how our current iteration of Titus deals with high API call volumes by scaling out horizontally. Overview: The figure below depicts a simplified high-level architecture of a single Titus cluster (a.k.a. …
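As a rough illustration of the scale-out idea (a hypothetical sketch, not Titus's actual implementation), each gateway node could answer read-heavy API calls from an in-memory view kept up to date by an event stream, so read capacity grows with the number of nodes:

// Hypothetical sketch, not Titus code: a per-node in-memory view of job state,
// updated from an event stream and used to answer read API calls locally.
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

class GatewayCache {
    private final Map<String, String> jobStates = new ConcurrentHashMap<>();

    // Apply each update event received from the source of truth.
    void onJobEvent(String jobId, String state) {
        jobStates.put(jobId, state);
    }

    // Serve reads from local memory instead of calling the backing store.
    Optional<String> getJob(String jobId) {
        return Optional.ofNullable(jobStates.get(jobId));
    }
}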


Data ingestion pipeline with Operation Management

The Netflix TechBlog

These media-focused machine learning algorithms, as well as other teams, generate a lot of data from the media files; as we described in our previous blog, this data is stored as annotations in Marken. We cannot, however, search or serve low-latency retrievals directly from the files. Marken Architecture: Marken’s architecture diagram is as follows.


Uber’s Big Data Platform: 100+ Petabytes with Minute Latency

Uber Engineering

To accomplish this, Uber relies heavily on making data-driven decisions at every level, from forecasting rider demand during high traffic events to identifying and addressing bottlenecks …
