
InnoDB Performance Optimization Basics

Percona

This blog follows up on our earlier 'InnoDB Performance Optimization Basics' posts from 2007 and 2013. Storage: the type of storage and disks used for database servers can have a significant impact on performance and reliability, so benchmark before you decide. Keep transparent huge pages (THP) disabled.
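Not part of the original post, but as a minimal illustration of the THP recommendation: a small Node.js/TypeScript sketch (assuming a Linux host and the standard sysfs path) that reports whether transparent huge pages are currently disabled.

```typescript
// Minimal sketch: check whether transparent huge pages (THP) are disabled
// on a Linux database host. Assumes the standard sysfs path.
import { readFileSync } from "node:fs";

const THP_PATH = "/sys/kernel/mm/transparent_hugepage/enabled";

try {
  // The kernel marks the active mode in brackets, e.g. "always madvise [never]".
  const raw = readFileSync(THP_PATH, "utf8").trim();
  const active = raw.match(/\[(\w+)\]/)?.[1] ?? "unknown";
  console.log(
    active === "never"
      ? "THP is disabled (as recommended for InnoDB workloads)."
      : `THP mode is "${active}"; consider setting it to "never" and re-benchmarking.`
  );
} catch {
  console.log(`Could not read ${THP_PATH}; THP may not be available on this kernel.`);
}
```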


The Performance Inequality Gap, 2021

Alex Russell

A then-representative $200 USD device had 4-8 slow (in-order, low-cache) cores, ~2 GiB of RAM, and relatively slow MLC NAND flash storage. The cheapest (high-volume) Androids, the Moto G4 for example, perform like 2012/2013 iPhones. twitter.com/slightlylate/status/1139684093602349056


Trending Sources


Progress Delayed Is Progress Denied

Alex Russell

As an engineer on a browser team, I'm privy to the blow-by-blow of various performance projects, benchmark fire drills, and the ways performance marketing (deeply) impacts engineering priorities. With each team, benchmarks lost are understood as bugs. Browser release notes and caniuse tables since Blink forked from WebKit in 2013 [7].


Front-End Performance Checklist 2020 [PDF, Apple Pages, MS Word]

Smashing Magazine

Estimated Input Latency tells us if we are hitting that threshold, and ideally it should be below 50ms. Designed for the modern web, it responds to actual congestion rather than packet loss as TCP does; it is significantly faster, with higher throughput and lower latency, and the algorithm works differently.
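As a rough illustration of watching that 50ms budget in the field, here is a minimal browser-side TypeScript sketch using the Long Tasks API; the observer setup is an assumption for illustration, not something prescribed by the checklist. The API only reports main-thread tasks longer than 50ms, the same budget the input-latency metric above is tracking.

```typescript
// Minimal sketch: log main-thread tasks long enough to push input latency
// past the ~50 ms budget, using the Long Tasks API (Chromium-based browsers).
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // Every "longtask" entry has already exceeded the 50 ms threshold.
    console.warn(
      `Long task: ${entry.duration.toFixed(0)} ms starting at ${entry.startTime.toFixed(0)} ms`
    );
  }
});

// "buffered: true" also surfaces long tasks that happened before this script ran.
observer.observe({ type: "longtask", buffered: true });
```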


Front-End Performance Checklist 2021

Smashing Magazine

Estimated Input Latency tells us if we are hitting that threshold, and ideally it should be below 50ms. Designed for the modern web, it responds to actual congestion rather than packet loss as TCP does; it is significantly faster, with higher throughput and lower latency, and the algorithm works differently.