
Crucial Redis Monitoring Metrics You Must Watch

Scalegrid

Key Takeaways: Critical performance indicators such as latency, CPU usage, memory utilization, hit rate, and the number of connected clients, slaves, and evictions must be monitored to maintain Redis's high throughput and low latency. Likewise, rising throughput signals a heavier workload on the server, which typically comes with higher latency.
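
To make those indicators concrete, here is a minimal monitoring sketch using the redis-py client. It assumes a local instance on the default port; the field names come from the output of Redis's INFO command.

```python
# Minimal sketch: polling the Redis metrics called out above via INFO.
# Assumes redis-py (pip install redis) and a local instance on the
# default port 6379; adjust host/port for your deployment.
import redis

r = redis.Redis(host="localhost", port=6379)
info = r.info()  # one round trip; returns the parsed INFO output as a dict

hits = info["keyspace_hits"]
misses = info["keyspace_misses"]
hit_rate = hits / (hits + misses) if (hits + misses) else 0.0

print(f"connected_clients: {info['connected_clients']}")
print(f"used_memory_human: {info['used_memory_human']}")
print(f"evicted_keys:      {info['evicted_keys']}")
print(f"ops/sec:           {info['instantaneous_ops_per_sec']}")
print(f"hit rate:          {hit_rate:.2%}")
```

In practice you would sample these on an interval and alert on trends (e.g., a falling hit rate or climbing evictions) rather than on single readings.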


USENIX LISA2021 Computing Performance: On the Horizon

Brendan Gregg

This was a chance to talk about other things I've been working on, such as the present and future of hardware performance. The video is on [youtube] and the slides are on [slideshare] or as a [PDF]. I work on many areas of performance, but recently I've had a lot of demand to talk about BPF.


Trending Sources


5.5 mm in 1.25 nanoseconds

Random ASCII

That meant I started having regular meetings with the hardware engineers who were working with IBM on the CPU, which gave me even more expertise on this CPU. That expertise was critical in helping me discover a design flaw in one of its instructions, and in helping game developers master this finicky beast. So, anyway. Standard stuff.


Current status, needs, and challenges in Heterogeneous and Composable Memory from the HCM workshop (HPCA’23)

ACM Sigarch

However, building and utilizing HCM presents challenges, including interconnecting various memory technologies. There are three common mechanisms to access remote memory: modifying applications, modifying virtual memory, and hardware-level cache coherence support. The workshop also discussed CXL hardware availability for academia.
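
As a rough illustration of the first mechanism (modifying applications), the sketch below maps a far-memory device directly into the process's address space, so the application itself decides what lives in far memory. The device path /dev/dax0.0 is a hypothetical stand-in for a CXL- or persistent-memory-backed devdax device; nothing here comes from the workshop itself.

```python
# Illustrative sketch of the "modify applications" mechanism: the program
# explicitly maps a region of far/remote memory instead of relying on the
# OS (virtual-memory tricks) or on hardware cache coherence. The device
# path /dev/dax0.0 is hypothetical.
import mmap
import os

fd = os.open("/dev/dax0.0", os.O_RDWR)
try:
    # Map 2 MiB of the device; this buffer is far memory, while ordinary
    # Python objects continue to live in regular DRAM.
    buf = mmap.mmap(fd, 2 * 1024 * 1024, mmap.MAP_SHARED,
                    mmap.PROT_READ | mmap.PROT_WRITE)
    buf[:5] = b"hello"   # plain load/store access, no page-fault hooks
    print(buf[:5])
    buf.close()
finally:
    os.close(fd)
```

The trade-off the excerpt hints at: this approach needs no OS or hardware support beyond the device mapping, but every application must be changed to use it, whereas the virtual-memory and cache-coherence mechanisms keep applications unmodified.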


Predictive CPU isolation of containers at Netflix

The Netflix TechBlog

Because microprocessors are so fast, computer architecture design has evolved towards adding various levels of caching between compute units and the main memory, in order to hide the latency of bringing the bits to the brains. This avoids thrashing caches too much for B and evens out the pressure on the L3 caches of the machine.
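
Netflix's system predicts which containers should share cores; as a minimal sketch of the underlying isolation primitive only (not their actual scheduler), the snippet below pins a process to a restricted CPU set with Linux's sched_setaffinity, so co-located workloads like "B" stop evicting each other's cache lines.

```python
# Minimal sketch of the CPU-isolation primitive, assuming Linux and a
# machine with at least 8 logical CPUs. Not Netflix's predictive system;
# their scheduler chooses the CPU sets automatically.
import os

def pin(pid: int, cpus: set[int]) -> None:
    """Restrict `pid` to the given CPU set (Linux-only)."""
    os.sched_setaffinity(pid, cpus)

# Pin ourselves to cores 0-3; a co-located, cache-hungry neighbor would
# get a disjoint set (e.g. 4-7) so the two stop fighting over the same
# L3 slice. `neighbor_pid` is a placeholder in this sketch.
pin(0, {0, 1, 2, 3})            # pid 0 means the calling process
# pin(neighbor_pid, {4, 5, 6, 7})

print(os.sched_getaffinity(0))  # confirm the new CPU mask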


Towards a Reliable Device Management Platform

The Netflix TechBlog

Complementing the hardware is the software on the RAE and in the cloud, and bridging the software on both ends is a bi-directional control plane. When a new hardware device is connected, the Local Registry detects it and collects a set of information about it, such as networking information and the ESN (Electronic Serial Number).
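
As a toy sketch of that detect-and-collect flow (not Netflix's actual Local Registry), the snippet below records a newly connected device's ESN and networking details and hands the record to a placeholder control-plane publisher. Beyond the fields named in the excerpt, all names and structure here are assumptions.

```python
# Toy sketch of the detect-and-collect flow: when a device appears, the
# local registry records its identifying info and notifies the cloud
# side over the control plane. Field names (esn, ip, mac) follow the
# excerpt; everything else is an assumption.
from dataclasses import dataclass, field


@dataclass
class DeviceRecord:
    esn: str            # Electronic Serial Number
    ip: str
    mac: str


@dataclass
class LocalRegistry:
    devices: dict[str, DeviceRecord] = field(default_factory=dict)

    def on_device_connected(self, esn: str, ip: str, mac: str) -> None:
        record = DeviceRecord(esn=esn, ip=ip, mac=mac)
        self.devices[esn] = record
        self.publish_to_control_plane(record)

    def publish_to_control_plane(self, record: DeviceRecord) -> None:
        # Placeholder for the bi-directional control plane; the real
        # system would sync the record to the cloud-side registry.
        print(f"sync -> cloud: {record}")


registry = LocalRegistry()
registry.on_device_connected("ESN-0001", "10.0.0.42", "aa:bb:cc:dd:ee:ff")
```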


USENIX SREcon APAC 2022: Computing Performance: What's on the Horizon

Brendan Gregg

My personal opinion is that I don't see a widespread need for more capacity, given horizontal scaling and servers that can already exceed 1 Tbyte of DRAM; more bandwidth is also helpful, but I'd be concerned about the increased latency of adding a hop to more memory.