
Compress objects, not cache lines: an object-based compressed memory hierarchy

The Morning Paper

Compress objects, not cache lines: an object-based compressed memory hierarchy Tsai & Sanchez, ASPLOS’19. Existing cache and main memory compression techniques compress data in small fixed-size blocks, typically cache lines. This work instead builds on Hotpads, a hardware-managed hierarchy of scratchpad-like memories called pads.
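As a rough software analogy only (not the paper's hardware mechanism), compressing a whole object can expose redundancy that per-cache-line compression cannot see, since each small block is compressed in isolation:

```python
# Rough software analogy: compare compressing a serialized object as a whole
# versus compressing it in independent 64-byte "cache line" blocks. This is
# NOT the hardware scheme from the paper, just an illustration of why
# compression granularity matters.
import zlib

obj = (b'{"id": 1234, "name": "user-1234", "active": true}') * 32

# Whole-object compression: one compression context sees all the redundancy.
whole = len(zlib.compress(obj))

# Per-"cache-line" compression: each 64-byte block is compressed on its own,
# so redundancy across blocks is invisible to the compressor.
per_line = sum(len(zlib.compress(obj[i:i + 64])) for i in range(0, len(obj), 64))

print(f"original: {len(obj)} bytes, whole-object: {whole}, per-64B-block: {per_line}")
```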


Crucial Redis Monitoring Metrics You Must Watch

Scalegrid

Effective management of memory stores with policies like LRU/LFU, proactive monitoring of the replication process, and advanced metrics such as cache hit ratio and persistence indicators are crucial for ensuring data integrity and optimizing Redis’s performance. Cache Hit Ratio: The cache hit ratio represents the efficiency of cache usage.
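For reference, the cache hit ratio is typically derived from Redis's keyspace_hits and keyspace_misses counters. A minimal sketch using the redis-py client (host and port are placeholder assumptions):

```python
# Minimal sketch: compute the cache hit ratio from Redis INFO counters.
# Host/port are placeholder assumptions; adjust for your deployment.
import redis

r = redis.Redis(host="localhost", port=6379)
stats = r.info("stats")  # includes keyspace_hits and keyspace_misses

hits = stats["keyspace_hits"]
misses = stats["keyspace_misses"]
ratio = hits / (hits + misses) if (hits + misses) else 0.0

print(f"cache hit ratio: {ratio:.2%}")
```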


Trending Sources


MySQL Capacity Planning

Percona

Or worse yet, sometimes I get questions about regaining normal operations after a traffic increase has caused performance destabilization. But we can discuss common bottlenecks, how to assess them, and why proactive monitoring is so important when it comes to responding to traffic growth.
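As one illustration of that kind of proactive monitoring, here is a sketch that polls a few commonly watched MySQL global status counters. The connection parameters and the particular counters are assumptions for illustration, not recommendations from the article:

```python
# Sketch: poll a few MySQL global status counters often used to spot
# bottlenecks (connections, running threads, InnoDB buffer pool misses).
# Connection parameters and the chosen counters are assumptions.
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="monitor", password="secret")
cur = conn.cursor()

cur.execute("SHOW GLOBAL STATUS WHERE Variable_name IN "
            "('Threads_connected', 'Threads_running', "
            "'Innodb_buffer_pool_reads', 'Innodb_buffer_pool_read_requests')")
status = {name: int(value) for name, value in cur.fetchall()}

# Reads that missed the buffer pool and went to disk, relative to all read requests.
miss_rate = status["Innodb_buffer_pool_reads"] / max(status["Innodb_buffer_pool_read_requests"], 1)

print(f"threads connected/running: {status['Threads_connected']}/{status['Threads_running']}")
print(f"buffer pool miss rate: {miss_rate:.4%}")
conn.close()
```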


The Various Methods to Backup and Restore ProxySQL

Percona

The daemon accepts incoming traffic from MySQL clients and forwards it to backend MySQL servers. Its configuration includes runtime parameters, server grouping, and traffic-related settings. The proxy is designed to run continuously without needing to be restarted.
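One of the simpler backup approaches is to persist the runtime configuration to ProxySQL's on-disk SQLite database through the admin interface and then copy that file. A hedged sketch; the credentials, port, and paths below are common defaults used as assumptions:

```python
# Sketch of one backup approach: flush in-memory configuration to ProxySQL's
# on-disk SQLite database via the admin interface, then copy the file.
# Credentials, port, and paths are common defaults, used here as assumptions.
import shutil
import pymysql

admin = pymysql.connect(host="127.0.0.1", port=6032, user="admin", password="admin")
with admin.cursor() as cur:
    for stmt in ("SAVE MYSQL SERVERS TO DISK",
                 "SAVE MYSQL USERS TO DISK",
                 "SAVE MYSQL VARIABLES TO DISK",
                 "SAVE ADMIN VARIABLES TO DISK"):
        cur.execute(stmt)
admin.close()

# The persisted configuration now lives in the SQLite file; copy it somewhere safe.
shutil.copy2("/var/lib/proxysql/proxysql.db", "/backup/proxysql.db")
```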


5.5 mm in 1.25 nanoseconds

Randon ASCII

That meant I started having regular meetings with the hardware engineers who were working with IBM on the CPU, which gave me even more expertise on this CPU. That expertise was critical in helping me discover a design flaw in one of its instructions, and in helping game developers master this finicky beast.


Predictive CPU isolation of containers at Netflix

The Netflix TechBlog

Because microprocessors are so fast, computer architecture design has evolved towards adding various levels of caching between compute units and the main memory, in order to hide the latency of bringing the bits to the brains. This avoids thrashing the caches too much for a co-located workload and evens out the pressure on the L3 caches of the machine.
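The basic mechanism behind this kind of isolation is pinning workloads to disjoint sets of cores. A minimal Linux-only sketch; the PIDs and core lists are placeholders, and this is not Netflix's actual predictive scheduler:

```python
# Minimal Linux-only sketch: pin two processes to disjoint core sets so they
# contend less for each other's caches. PIDs and core lists are placeholders;
# Netflix's system predicts placements rather than hard-coding them.
import os

NOISY_PID = 12345       # placeholder: a cache-hungry batch workload
LATENCY_PID = 12346     # placeholder: a latency-sensitive service

os.sched_setaffinity(NOISY_PID, {0, 1, 2, 3})      # cores 0-3
os.sched_setaffinity(LATENCY_PID, {4, 5, 6, 7})    # cores 4-7

print(os.sched_getaffinity(LATENCY_PID))
```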


Taiji: managing global user traffic for large-scale Internet services at the edge

The Morning Paper

Taiji: managing global user traffic for large-scale internet services at the edge Xu et al., SOSP’19. It’s another networking paper to close out the week (and our coverage of SOSP’19), but whereas Snap looked at traffic routing within the datacenter, Taiji is concerned with routing traffic from the edge to a datacenter.
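To make the edge-to-datacenter routing idea concrete, here is a toy sketch (not Taiji's actual algorithm) that stably hashes users into buckets and assigns bucket ranges to datacenters according to a routing table of traffic fractions. The datacenter names and fractions are made up for illustration:

```python
# Toy sketch, not Taiji's algorithm: stably hash users into buckets, then map
# buckets to datacenters in proportion to a routing table of traffic fractions.
import hashlib

NUM_BUCKETS = 1000
ROUTING_TABLE = {"dc-east": 0.5, "dc-west": 0.3, "dc-eu": 0.2}  # fractions sum to 1.0

def bucket_of(user_id: str) -> int:
    digest = hashlib.sha256(user_id.encode()).digest()
    return int.from_bytes(digest[:8], "big") % NUM_BUCKETS

def datacenter_for(bucket: int) -> str:
    # Assign contiguous bucket ranges to datacenters in proportion to their fraction.
    threshold = 0.0
    for dc, fraction in ROUTING_TABLE.items():
        threshold += fraction
        if bucket < threshold * NUM_BUCKETS:
            return dc
    return next(reversed(ROUTING_TABLE))  # guard against floating-point rounding

print(datacenter_for(bucket_of("user-42")))
```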
