
USENIX LISA2021 Computing Performance: On the Horizon

Brendan Gregg

AWS Graviton2); for memory with the arrival of DDR5 and High Bandwidth Memory (HBM) on-processor; for storage including new uses for 3D XPoint as a 3D NAND accelerator; for networking with the rise of QUIC and eXpress Data Path (XDP); and so on. I also wrote about these topics in detail for my recent Systems Performance 2nd Edition book.


USENIX SREcon APAC 2022: Computing Performance: What's on the Horizon

Brendan Gregg

This talk originated from my updates to Systems Performance 2nd Edition, and this was the first time I've given this talk in person! CXL in effect allows a custom memory controller to be added to a system, to increase memory capacity, bandwidth, and overall performance. Ford, et al., “TCP


Trending Sources


What is AWS Lambda?

Dynatrace

The 2014 launch of AWS Lambda marked a milestone in how organizations use cloud services to deliver their applications more efficiently, by running functions at the edge of the cloud without the cost and operational overhead of on-premises servers. AWS continues to improve how it handles latency issues.
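At its core, a Lambda function is just a handler that the service invokes once per event. A minimal Python sketch, where the event field and greeting are purely illustrative rather than anything described in the article:

```python
import json

# Minimal AWS Lambda handler sketch. The standard entry point receives an
# event payload and a runtime context object; the "name" field used here is
# an illustrative assumption, not something from the article above.
def lambda_handler(event, context):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```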


Three Other Models of Computer System Performance: Part 1

ACM Sigarch

Computer systems, from Internet-of-Things devices to datacenters, are complex, and optimizing them can enhance capability and save money. Developing simulators, however, is time-consuming and requires a great deal of infrastructure work for a prospective system. (Answered in Part 2.) Bottleneck Analysis.
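Bottleneck analysis, loosely speaking, bounds a system's throughput by the resource whose capacity is exhausted first. A minimal Python sketch, using made-up service demands and capacities rather than numbers from the article:

```python
# Bottleneck-analysis sketch: throughput is limited by the resource whose
# capacity runs out first. The demands and capacities below are illustrative
# assumptions, not figures from the SIGARCH article.

# Service demand per request (seconds of busy time on each resource).
demands = {"cpu": 0.002, "disk": 0.010, "network": 0.001}

# Capacity of each resource (busy seconds it can supply per second of
# wall-clock time; 1.0 for a single device, N for N identical devices).
capacities = {"cpu": 8.0, "disk": 1.0, "network": 1.0}

# Maximum throughput each resource could sustain on its own (requests/second).
per_resource_limit = {r: capacities[r] / demands[r] for r in demands}

bottleneck = min(per_resource_limit, key=per_resource_limit.get)
print(f"bottleneck: {bottleneck}, "
      f"max throughput ~ {per_resource_limit[bottleneck]:.0f} requests/s")
```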


Three Other Models of Computer System Performance: Part 2

ACM Sigarch

How many buffers are needed to track pending requests as a function of needed bandwidth and expected latency? Can one both minimize latency and maximize throughput for unscheduled work? The M/M/1 queue will show us a required trade-off among (a) allowing unscheduled task arrivals, (b) minimizing latency, and (c) maximizing throughput.
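For intuition, the classic M/M/1 result puts mean time in system at W = 1/(μ − λ), so latency grows without bound as utilization ρ = λ/μ approaches 1. A short sketch with arbitrary illustrative rates:

```python
# M/M/1 intuition: with arrival rate lam and service rate mu (lam < mu),
# mean time in system is W = 1 / (mu - lam), which blows up as utilization
# rho = lam / mu approaches 1. The rates below are illustrative assumptions.

mu = 1000.0  # service rate: requests the server can complete per second

for rho in (0.5, 0.8, 0.9, 0.95, 0.99):
    lam = rho * mu            # arrival rate at this utilization
    w = 1.0 / (mu - lam)      # mean time in system (seconds)
    print(f"rho={rho:.2f}  throughput={lam:7.1f} req/s  "
          f"mean latency={w * 1e3:6.2f} ms")
```

The output makes the trade-off concrete: pushing throughput from 50% to 99% of capacity multiplies mean latency by fifty, which is exactly the tension among unscheduled arrivals, low latency, and high throughput that the article describes.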
