
Timestone: Netflix’s High-Throughput, Low-Latency Priority Queueing System with Built-in Support for Non-Parallelizable Workloads

The Netflix TechBlog

By Kostas Christidis. Introduction: Timestone is a high-throughput, low-latency priority queueing system we built in-house to support the needs of Cosmos, our media encoding platform. Over the past 2.5
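The full post covers Timestone’s actual architecture; purely to illustrate what “built-in support for non-parallelizable workloads” can mean for a priority queue, here is a minimal sketch. The ExclusivePriorityQueue name and the per-key exclusivity scheme below are hypothetical illustrations, not Netflix’s implementation: the queue simply never dispatches two items that share the same exclusivity key at once.

```python
import heapq
import itertools

class ExclusivePriorityQueue:
    """Toy priority queue that serializes items sharing an 'exclusivity key'.

    Illustration only -- not Timestone's design. Items with the same key model
    a non-parallelizable workload: at most one of them is in flight at a time.
    """

    def __init__(self):
        self._heap = []                  # entries: (priority, seq, key, item)
        self._seq = itertools.count()    # tie-breaker preserves insertion order
        self._in_flight = set()          # keys currently checked out by workers

    def enqueue(self, item, priority, key):
        # lower number = higher priority (min-heap)
        heapq.heappush(self._heap, (priority, next(self._seq), key, item))

    def dequeue(self):
        """Return the best-priority (key, item) whose key is not in flight,
        or None if the queue is empty or every queued item is blocked."""
        skipped = []
        result = None
        while self._heap:
            entry = heapq.heappop(self._heap)
            priority, seq, key, item = entry
            if key in self._in_flight:
                skipped.append(entry)     # blocked behind an in-flight item
                continue
            self._in_flight.add(key)
            result = (key, item)
            break
        for entry in skipped:             # re-queue anything we skipped over
            heapq.heappush(self._heap, entry)
        return result

    def complete(self, key):
        """Signal that the in-flight item for `key` finished, unblocking successors."""
        self._in_flight.discard(key)


# Usage: chunks of the same job are serialized; other jobs proceed by priority.
q = ExclusivePriorityQueue()
q.enqueue("encode chunk 1 of title A", priority=1, key="title-A")
q.enqueue("encode chunk 2 of title A", priority=1, key="title-A")
q.enqueue("encode chunk 1 of title B", priority=2, key="title-B")

key, item = q.dequeue()   # -> ("title-A", "encode chunk 1 of title A")
print(q.dequeue())        # -> ("title-B", ...): chunk 2 of title A is blocked
q.complete(key)           # finishing chunk 1 unblocks chunk 2 of title A
print(q.dequeue())        # -> ("title-A", "encode chunk 2 of title A")
```

In this sketch, items sharing a key (say, chunks of one encoding job) are handed out one at a time, while items with different keys continue to be served by priority whenever a blocked key has to wait.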


Stuff The Internet Says On Scalability For December 7th, 2018

High Scalability

It's HighScalability time: This is your 1500ms latency in real life situations - pic.twitter.com/guot8khIPX. — Ivo Mägi (@ivomagi) November 27, 2018. Do you like this sort of Stuff? Please support me on Patreon. I'd really appreciate it. Know anyone looking for a simple book explaining the cloud?


Stuff The Internet Says On Scalability For November 23rd, 2018

High Scalability

Delay is Not an Option: Low Latency Routing in Space (Murat). It's HighScalability time: Curious how SpaceX's satellite constellation works? Here's some fancy FCC reverse engineering magic. Do you like this sort of Stuff? Please support me on Patreon. I'd really appreciate it. Know anyone looking for a simple book explaining the cloud?


USENIX LISA2021 Computing Performance: On the Horizon

Brendan Gregg

References: I've reproduced the talk references below, so you can click on links: [Gregg 08] Brendan Gregg, “ZFS L2ARC,” [link] Jul 2008; [Gregg 10] Brendan Gregg, “Visualizations for Performance Analysis (and More),” [link] 2010; [Greenberg 11] Marc Greenberg, “DDR4: Double the speed, double the latency?”; Ford, et al., “TCP


USENIX SREcon APAC 2022: Computing Performance: What's on the Horizon

Brendan Gregg

My personal opinion is that I don't see a widespread need for more capacity given horizontal scaling and servers that can already exceed 1 Tbyte of DRAM; bandwidth is also helpful, but I'd be concerned about the increased latency for adding a hop to more memory. Ford, et al., “TCP


bpftrace (DTrace 2.0) for Linux 2018

Brendan Gregg

Screenshot: tracing read latency for PID 181. Example one-liner: # bpftrace -e 'kprobe:vfs_read /pid == 30153/ { @start[tid] = nsecs; } kretprobe:vfs_read /@start[tid]/ { @ns = hist(nsecs - @start[tid]); delete(@start[tid]); }'. Here are the key differences between DTrace and bpftrace as of August 2018. It's shaping up to be a DTrace version 2.0: I wrote seeksize.d
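Written out as a multi-line script with comments, the one-liner above reads like this (the PID filter is just the example value from the excerpt; change or drop it to trace another process):

```
// vfs_read() latency histogram for one process; expanded form of the
// one-liner above. 30153 is only the example PID from the excerpt.
kprobe:vfs_read /pid == 30153/
{
	// on entry: record a start timestamp, keyed by thread ID
	@start[tid] = nsecs;
}

kretprobe:vfs_read /@start[tid]/
{
	// on return: add the elapsed nanoseconds to a log2 histogram
	@ns = hist(nsecs - @start[tid]);
	delete(@start[tid]);
}
```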


Stuff The Internet Says On Scalability For August 17th, 2018

High Scalability

12 million requests / hour with sub-second latency, ~300GB of throughput / day. @coryodaniel: Rewrote an #AWS APIGateway & #lambda service that was costing us about $16000 / month in #elixir. It's running in 3 nodes that cost us about $150 / month. myelixirstatus !#Serverless.No
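For scale, 12 million requests an hour is roughly 3,300 requests per second, and $150 a month against the original $16,000 a month works out to a cost reduction of about 99%.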
