Rebuilding Netflix Video Processing Pipeline with Microservices

The Netflix TechBlog

This architecture shift greatly reduced processing latency and increased system resiliency. This introductory blog gives an overview of our journey; future posts will provide deeper dives into each service, sharing insights and lessons learned along the way. At its core, the pipeline divides the input video into small chunks for processing.
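
As a rough illustration of that chunking idea (not Netflix's actual pipeline code), here is a minimal Python sketch that splits a source timeline into fixed-duration chunks and processes them independently; the chunk duration, helper names, and use of parallelism are assumptions for illustration only:

```python
# Hypothetical sketch of chunk-based processing; not Netflix's actual pipeline.
from concurrent.futures import ProcessPoolExecutor
from dataclasses import dataclass

CHUNK_SECONDS = 30.0  # assumed fixed chunk duration

@dataclass
class Chunk:
    index: int
    start: float      # seconds into the source
    duration: float

def split_into_chunks(total_duration: float) -> list[Chunk]:
    """Divide the input video timeline into small, independently processable chunks."""
    chunks, start, index = [], 0.0, 0
    while start < total_duration:
        duration = min(CHUNK_SECONDS, total_duration - start)
        chunks.append(Chunk(index, start, duration))
        start += duration
        index += 1
    return chunks

def encode_chunk(chunk: Chunk) -> str:
    """Placeholder for per-chunk work (e.g. encoding); returns an artifact id."""
    return f"encoded-{chunk.index}"

def process_video(total_duration: float) -> list[str]:
    chunks = split_into_chunks(total_duration)
    # Chunks are independent of one another, so they can be worked on in
    # parallel and the results reassembled in order afterwards.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(encode_chunk, chunks))

if __name__ == "__main__":
    print(process_video(125.0))  # 5 chunks: 4 x 30s + 1 x 5s
```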

Consistent caching mechanism in Titus Gateway

The Netflix TechBlog

In the time since it was first presented as an advanced Mesos framework, Titus has transparently evolved from being built on top of Mesos to Kubernetes, handling an ever-increasing volume of containers. This blog post presents how our current iteration of Titus deals with high API call volumes by scaling out horizontally.
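
A common building block for spreading cache ownership across horizontally scaled nodes is a consistent-hash ring. The sketch below is a generic illustration of that idea under assumed node names; it is not the Titus Gateway implementation:

```python
# Generic consistent-hash ring sketch; illustrative only, not Titus Gateway code.
import bisect
import hashlib

class ConsistentHashRing:
    """Maps keys to nodes so that adding or removing a node remaps only a small slice of keys."""

    def __init__(self, nodes: list[str], vnodes: int = 100):
        self._ring: list[tuple[int, str]] = []
        for node in nodes:
            for i in range(vnodes):  # virtual nodes smooth out the key distribution
                self._ring.append((self._hash(f"{node}#{i}"), node))
        self._ring.sort()

    @staticmethod
    def _hash(value: str) -> int:
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def node_for(self, key: str) -> str:
        h = self._hash(key)
        idx = bisect.bisect_left(self._ring, (h, "")) % len(self._ring)
        return self._ring[idx][1]

# Hypothetical usage: the same key always lands on the same gateway node.
ring = ConsistentHashRing(["gateway-1", "gateway-2", "gateway-3"])
print(ring.node_for("job:12345"))
```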

Trending Sources

Jamstack CMS: The Past, The Present and The Future

Smashing Magazine

In the 2000s we had a showdown of two popular blog publishing platforms: MovableType in 2001 and WordPress in 2003. By Mike Neumegen.

Scalable Annotation Service: Marken

The Netflix TechBlog

The service should be able to serve real-time (UI) applications, so CRUD and search operations must be achieved with low latency. It will be used by many internal UI applications, and search latency for generic text queries is in milliseconds.

Data ingestion pipeline with Operation Management

The Netflix TechBlog

These media-focused machine learning algorithms, as well as other teams, generate a lot of data from the media files; as described in our previous blog, that data is stored as annotations in Marken. But we cannot search or serve low-latency retrievals directly from the files themselves. We refer the reader to our previous blog article for details.

Migrating Critical Traffic At Scale with No Downtime: Part 2

The Netflix TechBlog

Our previous blog post presented replay traffic testing — a crucial instrument in our toolkit that allows us to implement these transformations with precision and reliability. This blog post will delve into the techniques leveraged at Netflix to introduce these changes to production.
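
In broad strokes, replay traffic testing mirrors a sample of production requests to the new path and compares the responses, while users are still served from the existing path. The sketch below is a generic illustration of that idea; the URLs, sample rate, and comparison logic are assumptions, not Netflix's implementation:

```python
# Illustrative traffic-replay sketch; endpoints and comparison are assumptions.
import random
import requests

CURRENT_URL = "https://current.example.com"   # existing production path (hypothetical)
REPLAY_URL = "https://candidate.example.com"  # new path under test (hypothetical)
SAMPLE_RATE = 0.05                            # replay roughly 5% of traffic

def handle_request(path: str, params: dict) -> requests.Response:
    # The primary response is always what the caller gets back.
    primary = requests.get(f"{CURRENT_URL}{path}", params=params, timeout=2)
    if random.random() < SAMPLE_RATE:
        try:
            shadow = requests.get(f"{REPLAY_URL}{path}", params=params, timeout=2)
            if shadow.status_code != primary.status_code or shadow.text != primary.text:
                print(f"mismatch on {path}: {primary.status_code} vs {shadow.status_code}")
        except requests.RequestException as exc:
            # The replay copy is compare-only and must never affect the user-facing response.
            print(f"replay path failed for {path}: {exc}")
    return primary
```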

Crucial Redis Monitoring Metrics You Must Watch

Scalegrid

This blog post lists the important database metrics to monitor. Critical performance indicators such as latency, CPU usage, memory utilization, hit rate, and the number of connected clients, slaves, and evictions must be monitored to maintain Redis's high-throughput and low-latency capabilities.
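
Several of those indicators are exposed directly by Redis's INFO command; for example, hit rate can be derived from keyspace_hits and keyspace_misses. A minimal sketch with the redis-py client, assuming a local instance (host and port are placeholders):

```python
# Minimal Redis metrics sketch using redis-py; connection details are assumptions.
import redis

r = redis.Redis(host="localhost", port=6379)

info = r.info()  # parsed output of the INFO command
hits = info.get("keyspace_hits", 0)
misses = info.get("keyspace_misses", 0)
hit_rate = hits / (hits + misses) if (hits + misses) else 0.0

print(f"connected_clients : {info.get('connected_clients')}")
print(f"used_memory_human : {info.get('used_memory_human')}")
print(f"evicted_keys      : {info.get('evicted_keys')}")
print(f"hit rate          : {hit_rate:.2%}")
```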
