
Analyzing a High Rate of Paging

Brendan Gregg

Reads usually have apps waiting on them; writes may not (write-back caching). Hit Ctrl-C to end. ^C

    msecs        : count
    0 -> 1       : 83
    2 -> 3       : 20
    4 -> 7       : 0
    8 -> 15      : 41
    16 -> 31     : 1620
    32 -> 63     : 8139
    64 -> 127    : 176
    128 -> 255   : 95
    256 -> 511   : 61
    512 -> 1023  : 93

Cache 105
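The excerpt's histogram buckets I/O latencies into power-of-two ranges, the format BPF tracing tools print. As a rough illustration (not the tool's actual implementation, and with hypothetical sample latencies), the log2 bucketing can be sketched in a few lines of Python:

```python
import math

def log2_bucket(msecs):
    """Return the (lo, hi) power-of-two bucket a latency value falls into."""
    if msecs < 1:
        return (0, 1)
    e = int(math.floor(math.log2(msecs)))
    return (2 ** e, 2 ** (e + 1) - 1)

def histogram(samples):
    """Count samples per log2 bucket, like the msecs table above."""
    counts = {}
    for ms in samples:
        b = log2_bucket(ms)
        counts[b] = counts.get(b, 0) + 1
    return counts

# Hypothetical latencies in milliseconds.
hist = histogram([0.5, 2, 2, 35, 40, 70, 600])
for (lo, hi), n in sorted(hist.items()):
    print(f"{lo} -> {hi} : {n}")
```

Most samples landing in the 16-63 ms buckets, as in the excerpt, points at rotational-disk-scale latencies rather than page-cache hits.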

How To Add eBPF Observability To Your Product

Brendan Gregg

cachestat: file system cache statistics (line charts). execsnoop: new processes (via exec(2)) (table). Then, having discovered everything is C or Python, some rewrite it all in a different language. For a more recent example, I wrote cachestat(8) while on vacation in 2014 for use on the Netflix cloud, which was a mix of Linux 3.2

Latency 145

Trending Sources


Speeding Up Linux Kernel Builds With ccache

O'Reilly Software

ccache, the compiler cache, is a fantastic way to speed up build times for C and C++ code, and one I previously recommended. Usually when this happens with ccache, there's something non-deterministic about the builds that prevents cache hits.

Speed 40

Analyzing a High Rate of Paging

Brendan Gregg

I'm looking at the r_await column in particular: the average wait time for reads in milliseconds. Reads usually have apps waiting on them; writes may not (write-back caching). This is a 64-Gbyte memory system, and 48 Gbytes is in the page cache (the file system cache).

Cache 40
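The page cache figure quoted in this excerpt (48 of 64 Gbytes) is what Linux reports in the Cached field of /proc/meminfo. A minimal parser sketch, run here against a hypothetical meminfo snippet rather than a live system:

```python
def parse_meminfo(text):
    """Parse /proc/meminfo-style 'Key: value kB' lines into a dict of kB values."""
    info = {}
    for line in text.splitlines():
        if ":" not in line:
            continue
        key, rest = line.split(":", 1)
        info[key.strip()] = int(rest.split()[0])  # numeric value, in kB
    return info

# Hypothetical excerpt resembling the 64-Gbyte system described above.
sample = """\
MemTotal:       65964432 kB
MemFree:         1203344 kB
Cached:         50331648 kB
"""
mi = parse_meminfo(sample)
print(f"page cache: {mi['Cached'] / 1024**2:.1f} GB")  # 48.0 GB for this sample
```

On a real system you would read the text from open("/proc/meminfo") instead of the sample string.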

Speeding up Linux kernel builds with ccache

Nick Desaulniers

ccache, the compiler cache, is a fantastic way to speed up build times for C and C++ code, and one I previously recommended. Usually when this happens with ccache, there's something non-deterministic about the builds that prevents cache hits. Cold cache: max cache size 5.0 kB.

Speed 46
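ccache gets its hits by hashing everything that determines the object file, roughly the compiler, the flags, and the preprocessed source, and reusing the stored result when the hash matches. A toy sketch (not ccache's actual implementation) of why a non-deterministic input, such as an expanding timestamp macro, defeats the cache:

```python
import hashlib

def cache_key(compiler, flags, preprocessed_source):
    """Toy content hash over everything that affects the object file."""
    h = hashlib.sha256()
    for part in (compiler, flags, preprocessed_source):
        h.update(part.encode())
    return h.hexdigest()

cache = {}

def compile_cached(compiler, flags, src):
    """Return a (fake) object file, reusing the cache on identical input."""
    key = cache_key(compiler, flags, src)
    if key in cache:
        return cache[key], True      # cache hit
    obj = f"obj({src!r})"            # stand-in for a real compilation
    cache[key] = obj
    return obj, False                # cache miss

# Deterministic source: the second compile hits.
_, hit = compile_cached("gcc", "-O2", "int main(){return 0;}")
_, hit2 = compile_cached("gcc", "-O2", "int main(){return 0;}")
print(hit, hit2)   # False True

# Non-deterministic source (think __DATE__/__TIME__ expanding differently
# each build): every compile misses.
build_id = iter(range(1000))
_, hit3 = compile_cached("gcc", "-O2", f"/* build {next(build_id)} */ int main(){{return 0;}}")
_, hit4 = compile_cached("gcc", "-O2", f"/* build {next(build_id)} */ int main(){{return 0;}}")
print(hit3, hit4)  # False False
```

Anything that changes the preprocessed bytes between otherwise identical builds changes the key, which is the non-determinism the excerpt warns about.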

Three Other Models of Computer System Performance: Part 2

ACM Sigarch

The M/M/1 queue will show us a required trade-off among (a) allowing unscheduled task arrivals, (b) minimizing latency, and (c) maximizing throughput. For the previous cache miss buffer example, the 32-buffer answer is minimal for 100-ns average miss latency. While Little’s Law provides a black-box result, it does not expose tradeoffs.

Systems 53
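The latency/throughput trade-off in the M/M/1 queue shows up directly in its mean-response-time formula, W = 1/(mu - lambda) for service rate mu and arrival rate lambda: pushing throughput toward capacity blows up latency. A small sketch with assumed rates:

```python
def mm1_response_time(service_rate, arrival_rate):
    """Mean time in an M/M/1 system: W = 1 / (mu - lambda)."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    return 1.0 / (service_rate - arrival_rate)

mu = 10.0  # assumed service rate: 10 tasks/sec
for util in (0.5, 0.9, 0.99):
    lam = util * mu
    print(f"utilization {util:.0%}: W = {mm1_response_time(mu, lam) * 1000:.0f} ms")
```

At 50% utilization the mean response time is 200 ms for these assumed rates; at 99% it is 10 seconds, which is the stark trade-off the post describes.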

Three Other Models of Computer System Performance: Part 1

ACM Sigarch

For example, how many buffers must a cache have to record outstanding misses if it receives 2 memory references per cycle at 2.5 In our second blog post, we will present the M/M/1 queue that confronts us with a stark, required trade-off among (a) allowing unscheduled task arrivals, (b) minimizing latency, and (c) maximizing throughput.

Systems 60
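The buffer-sizing question is an application of Little's Law, L = lambda * W: average outstanding misses = miss arrival rate x average miss latency. A sketch with assumed numbers (the post's own parameters are truncated in this excerpt; the 0.32 misses/ns rate here is an illustration chosen to reproduce the 32-buffer answer mentioned in Part 2):

```python
import math

def buffers_needed(miss_rate_per_ns, avg_latency_ns):
    """Little's Law: average outstanding misses L = lambda * W."""
    return math.ceil(miss_rate_per_ns * avg_latency_ns)

# Assumed example: 0.32 misses/ns sustained, 100 ns average miss latency.
print(buffers_needed(0.32, 100))  # 32
```

As the posts note, this black-box result sizes the buffer for the average but says nothing about the arrival-rate/latency/throughput trade-offs, which is where the M/M/1 model comes in.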