
Latency vs. Throughput: Navigating the Digital Highway

VoltDB

In this fast-paced ecosystem, two vital elements determine the efficiency of the traffic on this digital highway: latency and throughput. LATENCY: THE WAITING GAME. Latency is like the time you spend waiting in line at your local coffee shop: queuing up, placing your order, and waiting while it is prepared. All these moments combined represent latency – the time it takes for your order to reach your hands.
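
A minimal sketch of the distinction in Python (handle_order is a hypothetical stand-in for serving one request): latency is how long a single request takes end to end, while throughput is how many requests complete per unit of time.

    import time

    def handle_order():
        # Hypothetical stand-in for serving one request.
        time.sleep(0.01)

    # Latency: the time ONE request takes, end to end.
    start = time.perf_counter()
    handle_order()
    latency = time.perf_counter() - start

    # Throughput: how many requests complete per second.
    n = 100
    start = time.perf_counter()
    for _ in range(n):
        handle_order()
    throughput = n / (time.perf_counter() - start)

    print(f"latency: {latency * 1000:.1f} ms, throughput: {throughput:.0f} req/s")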


Practical API Design at Netflix, Part 1: Using Protobuf FieldMask

The Netflix TechBlog

Remote calls are never free; they impose extra latency, increase the probability of errors, and consume network bandwidth. How can we achieve similar functionality when designing our gRPC APIs? This generated code contains classes for the defined messages, together with message and field descriptors.
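
As a rough sketch of the idea (assuming the protobuf package is installed; the Production message and its fields are hypothetical), the client sends a FieldMask naming only the fields it needs, and the server uses the mask to prune its response:

    from google.protobuf.field_mask_pb2 import FieldMask

    # The caller asks only for `title` and `format.width`; nested paths use dots.
    mask = FieldMask(paths=["title", "format.width"])
    print(mask.ToJsonString())  # -> "title,format.width"

    # Server side: copy only the masked fields from the fully populated
    # message into the response, so unrequested data never crosses the wire.
    # (full_production / pruned_production are hypothetical generated messages.)
    # mask.MergeMessage(full_production, pruned_production)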


Trending Sources


Seeing through hardware counters: a journey to threefold performance increase

The Netflix TechBlog

A quick canary test was free of errors and showed lower latency, which is expected given that our standard canary setup routes an equal amount of traffic to both the baseline running on 4xl and the canary on 12xl. What’s worse, average latency degraded by more than 50%, with both CPU and latency patterns becoming more “choppy.”
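
The comparison a canary run boils down to can be sketched like this, with made-up latency samples standing in for real measurements (this is not Netflix's tooling): given equal traffic on both sides, the canary's latency percentiles should not regress against the baseline.

    import random

    random.seed(42)
    # Hypothetical per-request latencies in milliseconds.
    baseline = sorted(random.gauss(20.0, 2.0) for _ in range(10_000))
    canary = sorted(random.gauss(31.0, 9.0) for _ in range(10_000))

    def pct(samples, p):
        # Nearest-rank percentile over pre-sorted samples.
        return samples[min(len(samples) - 1, int(p / 100 * len(samples)))]

    for p in (50, 99):
        b, c = pct(baseline, p), pct(canary, p)
        print(f"p{p}: baseline {b:.1f} ms, canary {c:.1f} ms ({(c - b) / b:+.0%})")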


The Three Cs: Concatenate, Compress, Cache

CSS Wizardry

Plotted on the same horizontal axis of 1.6s, the waterfalls speak for themselves: the concatenated bundle incurs 201ms of cumulative latency and 109ms of cumulative download; the many separate files incur 4,362ms of cumulative latency and 240ms of cumulative download. When we talk about downloading files, we generally have two things to consider: latency and bandwidth. It gets worse.
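
A back-of-the-envelope model makes the point (the numbers here are illustrative assumptions, ignoring connection reuse and parallel requests, not the article's test setup): each file pays a round trip of latency, while download time scales only with total bytes.

    RTT_MS = 100          # assumed latency per request (one round trip)
    BANDWIDTH_KBPS = 500  # assumed bandwidth in kilobytes per second

    def fetch_time_ms(n_files, total_kb):
        latency = n_files * RTT_MS                    # paid once per file
        download = total_kb / BANDWIDTH_KBPS * 1000   # scales with bytes
        return latency + download

    print(f"1 bundle of 300 KB: {fetch_time_ms(1, 300):.0f} ms")       # 700 ms
    print(f"30 files, 300 KB total: {fetch_time_ms(30, 300):.0f} ms")  # 3600 ms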


Crucial Redis Monitoring Metrics You Must Watch

Scalegrid

Key Takeaways: Critical performance indicators such as latency, CPU usage, memory utilization, hit rate, and the number of connected clients/slaves/evictions must be monitored to maintain Redis’s high-throughput, low-latency capabilities. Per-command timings, for example, are exposed by INFO commandstats:

    127.0.0.1:6379> INFO commandstats
    cmdstat_append:calls=797,usec=4480,usec_per_call=5.62
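
For programmatic monitoring, a minimal sketch using the redis-py client (assumed installed; host and port are placeholders) can pull several of these indicators from INFO:

    import redis

    r = redis.Redis(host="127.0.0.1", port=6379)
    info = r.info()  # parsed INFO output: server, clients, memory, stats, ...

    hits = info["keyspace_hits"]
    misses = info["keyspace_misses"]
    hit_rate = hits / (hits + misses) if hits + misses else 0.0

    print(f"connected_clients: {info['connected_clients']}")
    print(f"used_memory_human: {info['used_memory_human']}")
    print(f"evicted_keys: {info['evicted_keys']}")
    print(f"hit rate: {hit_rate:.1%}")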


Rebuilding Netflix Video Processing Pipeline with Microservices

The Netflix TechBlog

This architecture shift greatly reduced processing latency and increased system resiliency. We expanded pipeline support to serve our studio/content-development use cases, which had different latency and resiliency requirements compared to the traditional streaming use case. This testing stage took about two weeks.


SLOG: serializable, low-latency, geo-replicated transactions

The Morning Paper

SLOG: serializable, low-latency, geo-replicated transactions, Ren et al. Strict serializability reduces application code complexity and bugs, since it behaves like a system running on a single machine that processes transactions sequentially. That’s where SLOG (Serializable LOw-latency, Geo-replicated transactions) comes in.
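
A toy illustration of that single-machine model (not SLOG's actual protocol): replicas that apply the same ordered log of transactions, one transaction at a time, deterministically converge on the same state.

    # Each transaction is a list of (account, delta) writes; hypothetical data.
    txns = [
        [("alice", -50), ("bob", +50)],    # T1: transfer 50
        [("alice", -10), ("carol", +10)],  # T2: transfer 10
    ]

    def replay(transactions):
        state = {"alice": 100, "bob": 0, "carol": 0}
        for txn in transactions:  # one transaction at a time, in log order
            for account, delta in txn:
                state[account] += delta
        return state

    # Any two replicas replaying the same log reach the same state.
    assert replay(txns) == replay(list(txns))
    print(replay(txns))  # {'alice': 40, 'bob': 50, 'carol': 10}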
