USENIX SREcon APAC 2022: Computing Performance: What's on the Horizon

Brendan Gregg

My personal opinion is that I don't see a widespread need for more capacity, given horizontal scaling and servers that can already exceed 1 Tbyte of DRAM; bandwidth is also helpful, but I'd be concerned about the increased latency of adding a hop to more memory. Ford, et al., “TCP
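
The concern about an extra memory hop is concrete: a dependent load can't be prefetched, so every added hop lands on the critical path. A minimal pointer-chasing sketch in Node.js/TypeScript (my illustration, not from the talk; sizes and iteration counts are arbitrary assumptions) makes that latency directly measurable:

```ts
// Pointer-chasing microbenchmark: each load depends on the previous result,
// defeating hardware prefetch, so the loop pays full memory latency per step.
const SLOTS = 1 << 24; // 16M entries (~64 MiB), larger than typical caches

// Build a random single cycle so every access is a dependent load.
const order = new Uint32Array(SLOTS);
for (let i = 0; i < SLOTS; i++) order[i] = i;
for (let i = SLOTS - 1; i > 0; i--) { // Fisher-Yates shuffle
  const j = Math.floor(Math.random() * (i + 1));
  const t = order[i]; order[i] = order[j]; order[j] = t;
}
const next = new Uint32Array(SLOTS);
for (let i = 0; i < SLOTS; i++) next[order[i]] = order[(i + 1) % SLOTS];

const ITERS = 10_000_000;
let p = 0;
const t0 = process.hrtime.bigint();
for (let i = 0; i < ITERS; i++) p = next[p];
const t1 = process.hrtime.bigint();
// Print p so the chase can't be optimized away.
console.log(`~${(Number(t1 - t0) / ITERS).toFixed(1)} ns per dependent load (p=${p})`);
```

Running it with the working set sized to fit in cache versus spilling to DRAM (or a farther memory tier) shows the per-hop cost the quote is worried about.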

Trending Sources

The Performance Inequality Gap, 2021

Alex Russell

A then-representative $200 USD device had 4-8 slow (in-order, low-cache) cores, ~2 GiB of RAM, and relatively slow MLC NAND flash storage. The cheapest (high-volume) Androids perform like 2012/2013 iPhones; the Moto G4, for example. twitter.com/slightlylate/status/1139684093602349056

Front-End Performance Checklist 2021

Smashing Magazine

Defining The Environment: choosing a framework, baseline performance cost, Webpack, dependencies, CDN, front-end architecture, CSR, SSR, CSR + SSR, static rendering, prerendering, PRPL pattern. Estimated Input Latency tells us if we are hitting that threshold; ideally, it should be below 50ms. Shipped in Next.js
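
Estimated Input Latency is a lab metric, but the same ~50ms budget can be watched in the field. Here is a browser-side sketch using the standard Event Timing API (my example, not code from the checklist; the threshold value is taken from the text above):

```ts
// Flag real user inputs whose queuing delay crosses the ~50 ms budget.
const THRESHOLD_MS = 50;

const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as PerformanceEventTiming[]) {
    // Time the event spent queued before its handlers started running.
    const inputDelay = entry.processingStart - entry.startTime;
    if (inputDelay > THRESHOLD_MS) {
      console.warn(`${entry.name}: input delayed ${inputDelay.toFixed(1)} ms`);
    }
  }
});

// 16 ms is the minimum durationThreshold the Event Timing spec allows.
observer.observe({ type: "event", buffered: true, durationThreshold: 16 });
```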

Front-End Performance Checklist 2020 [PDF, Apple Pages, MS Word]

Smashing Magazine

Estimated Input Latency tells us if we are hitting that threshold; ideally, it should be below 50ms. Designed for the modern web, it responds to actual congestion rather than to packet loss (which is what TCP reacts to); it is significantly faster, with higher throughput and lower latency, and the algorithm works differently.
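
The elided subject here appears to be a model-based congestion controller such as BBR; a toy contrast (my illustration, with assumed names and numbers, not the article's code) shows why reacting to a bandwidth model rather than to loss yields higher throughput:

```ts
const MSS_BYTES = 1500; // assumed packet size

// Loss-based (Reno-style): additive increase each RTT, halve the window on loss.
function lossBasedNextCwnd(cwndPackets: number, sawLoss: boolean): number {
  return sawLoss ? Math.max(2, Math.floor(cwndPackets / 2)) : cwndPackets + 1;
}

// Model-based (BBR-style): keep about one bandwidth-delay product in flight,
// so an isolated loss does not collapse throughput.
function bdpInflightTarget(estBwBytesPerSec: number, minRttSec: number): number {
  return Math.ceil((estBwBytesPerSec * minRttSec) / MSS_BYTES);
}

// Example: a 100 Mbit/s bottleneck at 50 ms RTT.
console.log(lossBasedNextCwnd(400, true));       // 200: halved after one loss
console.log(bdpInflightTarget(100e6 / 8, 0.05)); // 417: target unchanged by it
```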