Building Netflix’s Distributed Tracing Infrastructure

The Netflix TechBlog

Investigating a video streaming failure consists of inspecting all aspects of a member account. Reconstructing a streaming session was a tedious and time-consuming process that involved tracing all interactions (requests) between the Netflix app, our Content Delivery Network (CDN), and backend microservices.
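
The excerpt describes stitching a session back together from the requests that flow between the app, the CDN, and backend microservices. As a general illustration of how such request traces get produced (a minimal sketch, not Netflix's actual tracing pipeline), the Python snippet below uses OpenTelemetry to emit a parent span for an incoming request and child spans for two hypothetical downstream calls; all service and span names are made up.

```python
# Minimal OpenTelemetry sketch: one parent span per incoming request,
# child spans for the downstream calls it fans out to. Spans are printed
# to the console instead of being shipped to a tracing backend.
# Service and span names below are hypothetical.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

trace.set_tracer_provider(TracerProvider())
trace.get_tracer_provider().add_span_processor(
    SimpleSpanProcessor(ConsoleSpanExporter())
)
tracer = trace.get_tracer("playback-api")  # hypothetical service name

def start_playback(title_id: str) -> None:
    # The child spans share the parent's trace ID, which is what lets a
    # whole session be reconstructed later by querying on that ID.
    with tracer.start_as_current_span("start_playback") as span:
        span.set_attribute("title.id", title_id)
        with tracer.start_as_current_span("license_service.get_license"):
            pass  # downstream RPC would go here
        with tracer.start_as_current_span("cdn_steering.pick_server"):
            pass  # downstream RPC would go here

start_playback("example-title")
```

Querying a trace store by that shared trace ID is what turns the tedious manual reconstruction described above into a single lookup.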

Predictive CPU isolation of containers at Netflix

The Netflix TechBlog

Because microprocessors are so fast, computer architecture design has evolved towards adding various levels of caching between compute units and the main memory, in order to hide the latency of bringing the bits to the brains. Isolating containers onto separate cores avoids one workload thrashing another's caches and evens out the pressure on the machine's L3 caches.
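
The excerpt is about keeping co-located workloads from thrashing each other's caches. As a much simpler illustration of the underlying isolation primitive (a sketch only; the predictive placement model from the post is not reproduced here), the Python snippet below pins the current process to a chosen set of logical CPUs on Linux, so that another workload pinned elsewhere does not compete for the same caches. The CPU IDs are arbitrary example values.

```python
# Pin the current process to a subset of logical CPUs (Linux only).
# Restricting two noisy workloads to disjoint CPU sets is the basic
# mechanism for limiting cache contention; choosing *which* CPUs to
# use is the hard part and is not shown here. CPU IDs are examples.
import os

def pin_to_cpus(cpu_ids: set[int]) -> None:
    os.sched_setaffinity(0, cpu_ids)  # 0 = the calling process
    print("Now restricted to CPUs:", sorted(os.sched_getaffinity(0)))

if __name__ == "__main__":
    pin_to_cpus({0, 1})  # e.g. keep this workload on CPUs 0-1
```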

Trending Sources

USENIX LISA2021 Computing Performance: On the Horizon

Brendan Gregg

There are changes for processors, with Arm-based server chips such as AWS Graviton2; for memory, with the arrival of DDR5 and High Bandwidth Memory (HBM) on-processor; for storage, including new uses for 3D XPoint as a 3D NAND accelerator; for networking, with the rise of QUIC and eXpress Data Path (XDP); and so on.

USENIX SREcon APAC 2022: Computing Performance: What's on the Horizon

Brendan Gregg

The video is now on YouTube, and the slides are online and as a PDF. In Q&A I was asked about CXL (Compute Express Link), which was fortunate as I had planned to cover it and then forgot, so the question let me talk about it (although the Q&A is missing from the video).

HTTP/3: Performance Improvements (Part 2)

Smashing Magazine

As we will see, QUIC and HTTP/3 indeed have great web performance potential, but mainly for users on slow networks. If your average visitor is on a fast cabled or cellular network, they probably won’t benefit from the new protocols all that much. Two-way latency is often called round-trip time (RTT).
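
Since the excerpt defines two-way latency as round-trip time (RTT), here is a small sketch that approximates the RTT to a host by timing a TCP handshake, which costs roughly one round trip. The hostname and port are example values, and connection-setup overhead is included, so treat the number as a rough estimate rather than a precise measurement.

```python
# Rough RTT estimate: time a TCP three-way handshake, which takes
# about one round trip to complete. Host and port are example values.
import socket
import time

def estimate_rtt_ms(host: str, port: int = 443, timeout: float = 3.0) -> float:
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; close immediately
    return (time.perf_counter() - start) * 1000.0

if __name__ == "__main__":
    print(f"~{estimate_rtt_ms('example.com'):.1f} ms to example.com")
```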