
Bandwidth or Latency: When to Optimise for Which

CSS Wizardry

When it comes to network performance, there are two main limiting factors that will slow you down: bandwidth and latency. Latency is defined as…. Bandwidth is defined as…. Where bandwidth deals with capacity, latency is more about the speed of transfer.
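As a rough illustration of how these two factors interact, here is a minimal sketch (my own, not from the article) that models fetch time as round-trip latency plus payload size divided by bandwidth; all figures are hypothetical.

    # Rough model: fetch_time ≈ round_trips * latency + size / bandwidth.
    # Hypothetical figures, purely for illustration.
    def fetch_time(size_bytes: float, bandwidth_bytes_per_s: float,
                   latency_s: float, round_trips: int = 1) -> float:
        """Estimate transfer time for a single resource."""
        return round_trips * latency_s + size_bytes / bandwidth_bytes_per_s

    # A 100 kB file on a 5 Mbps link with 100 ms of latency:
    print(fetch_time(100_000, 5_000_000 / 8, 0.100))   # ≈ 0.26 s
    # Doubling bandwidth only shrinks the second term; latency still dominates small files:
    print(fetch_time(100_000, 10_000_000 / 8, 0.100))  # ≈ 0.18 s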

Latency 133

The Three Cs: Concatenate, Compress, Cache

CSS Wizardry

When we talk about downloading files, we—generally speaking—have two things to consider: latency and bandwidth. Plotted on the same horizontal axis of 1.6s, the waterfalls speak for themselves: 201ms of cumulative latency and 109ms of cumulative download versus 4,362ms of cumulative latency and 240ms of cumulative download. It gets worse.
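The gap between cumulative latency and cumulative download is easy to reproduce with a back-of-the-envelope model. This sketch (mine, with made-up numbers, ignoring parallelism and connection reuse) compares twenty small files against one concatenated bundle of the same total size.

    # Hypothetical comparison: many small requests vs. one concatenated bundle.
    LATENCY_S = 0.075       # assumed round-trip latency per request
    BANDWIDTH = 1_250_000   # assumed 10 Mbps link, in bytes per second

    def cumulative_cost(file_sizes):
        latency = len(file_sizes) * LATENCY_S    # one round trip per file
        download = sum(file_sizes) / BANDWIDTH   # pure transfer time
        return latency, download

    # 20 files of 12 kB each vs. one 240 kB bundle (same total bytes):
    print(cumulative_cost([12_000] * 20))  # (1.5 s latency, ~0.19 s download)
    print(cumulative_cost([240_000]))      # (0.075 s latency, ~0.19 s download)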

Cache 291

Trending Sources


The Speed of Time

Brendan Gregg

A Cassandra database cluster had switched to Ubuntu and noticed write latency increased by over 30%. Measuring the speed of time: is there already a microbenchmark for os::javaTimeMillis()? I've shared many posts about superpower observability tools, but often humble hacking is just as effective. Try changing the kernel clocksource.
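The post's microbenchmark targets Java's os::javaTimeMillis(); as a rough analogue, this sketch (not from the article) times repeated clock reads in Python and prints the active kernel clocksource, whose sysfs path is standard on Linux.

    # Rough analogue of a clock-read microbenchmark; the article's version
    # targets Java's os::javaTimeMillis(), this one just times time.time().
    import time

    def bench_clock_reads(n=1_000_000):
        start = time.perf_counter()
        for _ in range(n):
            time.time()
        elapsed = time.perf_counter() - start
        return elapsed / n * 1e9   # nanoseconds per call

    print(f"{bench_clock_reads():.1f} ns per time.time() call")

    # On Linux, the active kernel clocksource (e.g. tsc) is readable here:
    with open("/sys/devices/system/clocksource/clocksource0/current_clocksource") as f:
        print("clocksource:", f.read().strip())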

Speed 126

Crucial Redis Monitoring Metrics You Must Watch

Scalegrid

Key takeaways: critical performance indicators such as latency, CPU usage, memory utilization, hit rate, and the number of connected clients/slaves/evictions must be monitored to maintain Redis’s high throughput and low latency. Similarly, increased throughput signifies a more intensive workload on the server and, with it, higher latency.
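As a concrete starting point, most of those indicators are exposed by Redis's INFO command. The snippet below is a minimal sketch using the redis-py client (host and port are placeholders), computing the hit rate from keyspace_hits and keyspace_misses.

    # Minimal Redis monitoring sketch using redis-py; connection details are placeholders.
    import redis

    r = redis.Redis(host="localhost", port=6379)
    info = r.info()   # fields come from Redis's INFO command

    hits = info.get("keyspace_hits", 0)
    misses = info.get("keyspace_misses", 0)
    hit_rate = hits / (hits + misses) if (hits + misses) else 0.0

    print("connected_clients:", info.get("connected_clients"))
    print("used_memory_human:", info.get("used_memory_human"))
    print("evicted_keys:", info.get("evicted_keys"))
    print(f"hit_rate: {hit_rate:.2%}")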

Metrics 130

Redis® Monitoring Strategies for 2024

Scalegrid

Identifying key Redis® metrics such as latency, CPU usage, and memory usage is crucial for effective Redis monitoring. To monitor Redis® instances effectively, collect metrics focusing on cache hit ratio, memory allocated, and latency thresholds, providing clear insight into overall system performance.
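One way to act on a latency threshold is to sample command round trips directly. The sketch below (mine, with an illustrative budget rather than a recommended one) times a batch of PINGs via redis-py and flags the instance if the average exceeds the threshold.

    # Latency-threshold check: time a batch of PINGs and warn if the average
    # exceeds a hypothetical budget. Assumes redis-py and a local server.
    import time
    import redis

    THRESHOLD_MS = 1.0   # illustrative budget, not a recommended value
    r = redis.Redis(host="localhost", port=6379)

    samples = []
    for _ in range(100):
        start = time.perf_counter()
        r.ping()
        samples.append((time.perf_counter() - start) * 1000)

    avg_ms = sum(samples) / len(samples)
    print(f"avg PING latency: {avg_ms:.3f} ms ({'OK' if avg_ms <= THRESHOLD_MS else 'SLOW'})")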

Strategy 130

Time to First Byte: What It Is and Why It Matters

CSS Wizardry

The first—and often most surprising for people to learn—thing that I want to draw your attention to is that TTFB counts one whole round trip of latency. This is because mobile networks are, as a rule, high-latency connections. Last-mile latency deals with the disproportionate complexity toward the terminus of a connection.
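To see that round trip in practice, TTFB can be approximated from the client side by timing how long it takes for the response status line and headers to arrive. This sketch (my own; the hostname is a placeholder) uses Python's http.client and deliberately includes DNS, TCP, and TLS setup in the measurement.

    # Rough client-side TTFB measurement: time from opening the connection and
    # sending the request until the status line and headers are readable.
    import time
    import http.client

    def measure_ttfb(host: str, path: str = "/") -> float:
        conn = http.client.HTTPSConnection(host, timeout=10)
        start = time.perf_counter()
        conn.request("GET", path)   # connects (DNS/TCP/TLS) on first use
        conn.getresponse()          # returns once status line + headers are read
        ttfb = time.perf_counter() - start
        conn.close()
        return ttfb

    print(f"TTFB: {measure_ttfb('example.com') * 1000:.0f} ms")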

Latency 269