
Designing Instagram

High Scalability

Design a photo-sharing platform similar to Instagram where users can upload their photos and share them with their followers, and generate machine-learning-based personalized recommendations to discover new people, photos, videos, and stories relevant to one’s interests. High Level Design. Component Design. API Design.
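
To make the “API Design” section concrete, here is a minimal TypeScript sketch of what the core endpoints of such a photo-sharing service might look like; the routes, types, and pagination scheme are illustrative assumptions, not the article’s actual design.

```typescript
// Hypothetical API surface for a photo-sharing service; names and routes
// are illustrative only, not the article's actual design.
interface Photo {
  photoId: string;
  ownerId: string;
  caption?: string;
  createdAt: string; // ISO-8601 timestamp
  url: string;       // CDN location of the stored image
}

interface PhotoApi {
  // POST /v1/photos : upload a photo, then fan it out to followers' feeds
  uploadPhoto(ownerId: string, imageBytes: Uint8Array, caption?: string): Promise<Photo>;

  // GET /v1/users/:id/feed?cursor=... : paginated, newest-first home feed
  getFeed(userId: string, cursor?: string, limit?: number): Promise<{ photos: Photo[]; nextCursor?: string }>;

  // POST /v1/users/:id/follow : follow another user
  followUser(followerId: string, followeeId: string): Promise<void>;
}
```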


Optimizing Video Streaming CDN Architecture for Cost Reduction and Enhanced Streaming Performance

IO River

Those days are long gone. Now, with viewers all over the world expecting flawless and high-definition streaming, video providers have their work cut out for them. What Comprises Video Streaming - Traffic Characteristics: with the emphasis on a high-quality streaming experience, the optimization starts from the very core.



Seamlessly Swapping the API backend of the Netflix Android app

The Netflix TechBlog

This allows the app to query a list of “paths” in each HTTP request, and get specially formatted JSON (jsonGraph) that we use to cache the data and hydrate the UI. In the snippet above, we’re accessing the detail key for the video object with id 80154610. Instead, it is part of a different path: [videos, <id>, similars].
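
To illustrate the path idea, here is a minimal TypeScript sketch of a client that asks for several paths in one HTTP request and caches the returned jsonGraph; the /model.json endpoint, query encoding, and cache shape are assumptions for illustration, not Netflix’s actual API.

```typescript
// Sketch of a path-based request: ask for several paths in one HTTP call,
// then keep the returned jsonGraph around to hydrate the UI from cache.
// The /model.json endpoint and query encoding are assumptions.
type Path = (string | number)[];

interface JsonGraphEnvelope {
  jsonGraph: Record<string, unknown>;
}

const cache = new Map<string, JsonGraphEnvelope>();

async function getPaths(paths: Path[]): Promise<JsonGraphEnvelope> {
  const key = JSON.stringify(paths);
  const cached = cache.get(key);
  if (cached) return cached; // serve repeat queries from the local cache

  const res = await fetch(
    "/model.json?paths=" + encodeURIComponent(JSON.stringify(paths))
  );
  const envelope: JsonGraphEnvelope = await res.json();
  cache.set(key, envelope);
  return envelope;
}

// The detail of video 80154610 and its similars live under different paths.
getPaths([
  ["videos", 80154610, "detail"],
  ["videos", 80154610, "similars"],
]).then((g) => console.log(g.jsonGraph));
```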


Predictive CPU isolation of containers at Netflix

The Netflix TechBlog

Because microprocessors are so fast, computer architecture design has evolved towards adding various levels of caching between compute units and the main memory, in order to hide the latency of bringing the bits to the brains. This avoids thrashing caches too much for B and evens out the pressure on the L3 caches of the machine.
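
To illustrate the isolation idea, here is a small TypeScript sketch that greedily spreads containers across L3 cache domains according to a predicted cache-pressure score; the scoring, domain count, and greedy placement are assumptions for illustration, not Netflix’s actual predictive scheduler.

```typescript
// Illustrative sketch: spread containers across L3 cache domains so that
// cache-hungry workloads don't share a domain and thrash each other.
// The pressure scores and greedy placement are assumptions.
interface Container {
  name: string;
  predictedCachePressure: number; // e.g. estimated last-level cache misses per second
}

function placeContainers(containers: Container[], l3Domains: number): Map<number, Container[]> {
  const placement = new Map<number, Container[]>();
  const load = new Array<number>(l3Domains).fill(0);
  for (let d = 0; d < l3Domains; d++) placement.set(d, []);

  // Place the most cache-hungry containers first, always onto the
  // currently least-loaded L3 domain, to even out the pressure.
  const sorted = [...containers].sort(
    (a, b) => b.predictedCachePressure - a.predictedCachePressure
  );
  for (const c of sorted) {
    const target = load.indexOf(Math.min(...load));
    placement.get(target)!.push(c);
    load[target] += c.predictedCachePressure;
  }
  return placement;
}

// The two noisy containers end up on different L3 domains.
console.log(placeContainers(
  [
    { name: "A", predictedCachePressure: 9 },
    { name: "B", predictedCachePressure: 8 },
    { name: "C", predictedCachePressure: 1 },
  ],
  2
));
```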


USENIX SREcon APAC 2022: Computing Performance: What's on the Horizon

Brendan Gregg

The video is now on [YouTube], and the slides are [online] and as a [PDF]. In Q&A I was asked about CXL (Compute Express Link), which was fortunate as I had planned to cover it and then forgot, so the question let me talk about it (although Q&A is missing from the video). Ford, et al., “TCP


The Fastest Google Fonts

CSS Wizardry

It’s widely accepted that self-hosted fonts are the fastest option: same origin means reduced network negotiation, predictable URLs mean we can preload, and self-hosting means we can set our own cache-control. On a high-latency connection, this spells bad news. Put another-other way, this file is latency-bound, not bandwidth-bound.
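
As a small illustration of the self-hosting advice, here is a TypeScript sketch that injects a preload hint for a self-hosted font file; the font path is a placeholder, and in practice the <link rel="preload"> tag would normally sit directly in the HTML <head>.

```typescript
// Sketch: programmatically add a preload hint for a self-hosted font.
// The path "/fonts/inter-var.woff2" is a placeholder.
function preloadFont(url: string): void {
  const link = document.createElement("link");
  link.rel = "preload";
  link.as = "font";
  link.type = "font/woff2";
  link.href = url;
  // Font requests use anonymous CORS mode, even on the same origin,
  // so the preload must match or the hint is wasted.
  link.crossOrigin = "anonymous";
  document.head.appendChild(link);
}

preloadFont("/fonts/inter-var.woff2");
```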
