
USENIX SREcon APAC 2022: Computing Performance: What's on the Horizon

Brendan Gregg

My personal opinion is that I don't see a widespread need for more capacity given horizontal scaling and servers that can already exceed 1 Tbyte of DRAM; bandwidth is also helpful, but I'd be concerned about the increased latency of adding a hop to more memory. Ford, et al., “TCP


File systems unfit as distributed storage backends: lessons from ten years of Ceph evolution

The Morning Paper

Breaking that assumption allowed Ceph to introduce a new storage backend called BlueStore with much better performance and predictability, and the ability to support the changing storage hardware landscape. Readers of this blog probably have a pretty good idea what ‘efficient transactions’ and ‘fast metadata operations’ are all about.



Solaris to Linux Migration 2017

Brendan Gregg

Nowadays, there are three built-in tracers that you should know about:

- **ftrace**: since 2008, this serves many tracing needs, and has been enhanced recently with hist triggers for custom histograms.

Here's some output from my zfsdist tool, in bcc/BPF, which measures ZFS latency as a histogram on Linux: `# zfsdist`.
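As a rough illustration of the pattern tools like zfsdist use, here is a minimal bcc/BPF sketch that records a kernel function's latency as a power-of-2 histogram: a kprobe stamps the entry time, a kretprobe computes the delta and increments the histogram. It instruments `vfs_read` purely as a stand-in target (zfsdist itself attaches to the ZFS read/write/open/fsync entry points); the probe target and the microsecond unit are assumptions for the example, not zfsdist's actual source.

```python
#!/usr/bin/env python
# Minimal sketch of the bcc/BPF latency-histogram pattern (assumes the
# bcc Python bindings are installed and root privileges). vfs_read is
# used here only as a stand-in target function.
from bcc import BPF
from time import sleep

prog = """
#include <uapi/linux/ptrace.h>

BPF_HASH(start, u32, u64);   // entry timestamp, keyed by thread id
BPF_HISTOGRAM(dist);         // power-of-2 latency histogram

int trace_entry(struct pt_regs *ctx) {
    u32 tid = bpf_get_current_pid_tgid();  // low 32 bits = thread id
    u64 ts = bpf_ktime_get_ns();
    start.update(&tid, &ts);
    return 0;
}

int trace_return(struct pt_regs *ctx) {
    u32 tid = bpf_get_current_pid_tgid();
    u64 *tsp = start.lookup(&tid);
    if (tsp == 0)
        return 0;  // missed the entry probe
    u64 delta_us = (bpf_ktime_get_ns() - *tsp) / 1000;
    dist.increment(bpf_log2l(delta_us));
    start.delete(&tid);
    return 0;
}
"""

b = BPF(text=prog)
b.attach_kprobe(event="vfs_read", fn_name="trace_entry")
b.attach_kretprobe(event="vfs_read", fn_name="trace_return")

print("Tracing vfs_read latency... Ctrl-C to print the histogram.")
try:
    sleep(99999999)
except KeyboardInterrupt:
    pass
b["dist"].print_log2_hist("usecs")
```

The per-thread hash keyed on thread id is what lets the return probe pair its timestamp with the matching entry even under concurrency; aggregating into an in-kernel log2 histogram keeps overhead low because only the summary, not per-event data, crosses to user space.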