
USENIX LISA2021 Computing Performance: On the Horizon

Brendan Gregg

I summarized these topics and more as a plenary conference talk, including my own predictions (as a senior performance engineer) for the future of computing performance, with a focus on back-end servers. This was a chance to talk about other things I've been working on, such as the present and future of hardware performance.


USENIX SREcon APAC 2022: Computing Performance: What's on the Horizon

Brendan Gregg

My personal opinion is that I don't see a widespread need for more capacity given horizontal scaling and servers that can already exceed 1 Tbyte of DRAM; bandwidth is also helpful, but I'd be concerned about the increased latency of adding a hop to more memory.

This spring: High-Performance and Low-Latency C++ (Stockholm) and ACCU (Bristol)

Sutter's Mill

Tue-Thu Apr 25-27: High-Performance and Low-Latency C++ (Stockholm). On April 25-27, I’ll be in Stockholm (Kista) giving a three-day seminar on “High-Performance and Low-Latency C++.”


Latency: Will it undermine the most interesting 5G use cases?

VoltDB

One of the big topics at the OpenStack conference was how to prepare and implement a successful transition to 5G. Unfortunately, this means that the age-old telco bugbears will rear their ugly heads again, including latency. 5G, as a fundamental requirement, mandates a 1 millisecond latency from the data source to its destination.
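A rough back-of-the-envelope sketch shows why that 1 ms budget is so demanding. The numbers below are assumptions, not from the article: roughly 200 km/ms signal propagation in fibre and a single request/response round trip, ignoring queuing, serialization, and processing time.

/* Back-of-the-envelope: how far apart can the data source and its
 * destination be if the end-to-end budget is 1 ms?
 * Assumptions (not from the article): ~200 km/ms propagation in fibre
 * (about two thirds of c), one round trip, no processing or queuing time. */
#include <stdio.h>

int main(void) {
    const double budget_ms       = 1.0;    /* 5G end-to-end latency target  */
    const double fibre_km_per_ms = 200.0;  /* signal speed in optical fibre */

    double one_way_ms      = budget_ms / 2.0;   /* request + reply */
    double max_distance_km = one_way_ms * fibre_km_per_ms;

    printf("A %.1f ms round-trip budget limits the data source and its\n"
           "destination to within about %.0f km of each other.\n",
           budget_ms, max_distance_km);
    return 0;
}

Under those assumptions, propagation alone confines the two endpoints to within roughly 100 km of each other, before a single cycle of processing time has been spent.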


Current status, needs, and challenges in Heterogeneous and Composable Memory from the HCM workshop (HPCA’23)

ACM Sigarch

There are three common mechanisms to access remote memory: modifying applications, modifying virtual memory, and hardware-level cache-coherence support. Latency can even be lowered by introducing a multi-headed device that collapses switches and memory controllers. The workshop also discussed CXL hardware availability with academia.
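As a minimal sketch of the first mechanism (modifying applications), assume the remote or CXL-attached memory is exposed to Linux as a memory-only NUMA node; node 1 below is a placeholder, not something from the article. The application then places an allocation on that node explicitly with libnuma, whereas the other two mechanisms would leave placement to the kernel's virtual-memory layer or to hardware cache coherence.

/* Sketch of the "modify the application" approach to remote memory.
 * Assumption: the remote/CXL memory shows up as NUMA node 1 (system-specific).
 * Build with: gcc remote_mem.c -lnuma */
#include <stdio.h>
#include <string.h>
#include <numa.h>

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA support is not available on this system\n");
        return 1;
    }

    const int    remote_node = 1;          /* hypothetical memory-only node */
    const size_t len         = 64UL << 20; /* 64 MiB                        */

    /* Explicitly place this buffer on the remote node: accesses pay the
     * extra hop's latency, but local DRAM stays free for hotter data. */
    void *buf = numa_alloc_onnode(len, remote_node);
    if (buf == NULL) {
        fprintf(stderr, "numa_alloc_onnode failed\n");
        return 1;
    }

    memset(buf, 0, len);  /* touch the pages so memory is actually committed */
    printf("Placed %zu MiB on NUMA node %d\n", len >> 20, remote_node);

    numa_free(buf, len);
    return 0;
}

The other two mechanisms make the same capacity available without code changes, at the cost of less control over which data ends up behind the extra hop.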
