
Seeing through hardware counters: a journey to threefold performance increase

The Netflix TechBlog

While we understand it’s virtually impossible to achieve a linear increase in throughput as the number of vCPUs grows, a near-linear increase is attainable. We also see much higher L1 cache activity combined with a 4x higher count of MACHINE_CLEARS. A cache line is a concept similar to a memory page: a fixed-size chunk of memory that the CPU caches transfer and track as a unit, such as the data sitting in Thread 0’s cache in the article’s example.
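A common cause of this pattern (heavy L1 traffic plus machine clears) is false sharing: two threads repeatedly write to different variables that happen to share a cache line, so each write invalidates the other core’s copy. The sketch below is a minimal, self-contained Java illustration of the effect, not the JVM-internal case the article analyzes; the class names, iteration count, and padding width are assumptions.

```java
// Illustrative false-sharing demo (not from the article): two threads update
// separate counters. When the counters share a cache line, writes from one
// core repeatedly invalidate the other core's copy of the line.
public class FalseSharingDemo {
    // Plain layout: a and b are adjacent and typically land on one cache line.
    static class Shared {
        volatile long a;
        volatile long b;
    }

    // Heuristic padding: seven extra longs (~56 bytes) push b toward a
    // different cache line. Field layout is ultimately up to the JVM.
    static class Padded {
        volatile long a;
        long p1, p2, p3, p4, p5, p6, p7;  // padding
        volatile long b;
    }

    static final long ITERATIONS = 50_000_000L;

    static long runMillis(Runnable incA, Runnable incB) throws InterruptedException {
        Thread t1 = new Thread(incA);
        Thread t2 = new Thread(incB);
        long start = System.nanoTime();
        t1.start(); t2.start();
        t1.join();  t2.join();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws InterruptedException {
        Shared s = new Shared();
        long shared = runMillis(
            () -> { for (long i = 0; i < ITERATIONS; i++) s.a++; },
            () -> { for (long i = 0; i < ITERATIONS; i++) s.b++; });

        Padded p = new Padded();
        long padded = runMillis(
            () -> { for (long i = 0; i < ITERATIONS; i++) p.a++; },
            () -> { for (long i = 0; i < ITERATIONS; i++) p.b++; });

        System.out.println("same cache line: " + shared + " ms");
        System.out.println("padded apart:    " + padded + " ms");
    }
}
```

On recent JDKs, the jdk.internal.vm.annotation.@Contended annotation (enabled with -XX:-RestrictContended) is the more reliable way to keep hot fields on separate cache lines, since the JVM is free to reorder plain padding fields.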

Hardware 363

Kubernetes in the wild report 2023

Dynatrace

Modern, cloud-native computing is impossible to separate from containers and Kubernetes adoption. As adoption grows and the technology continues to advance, Kubernetes has emerged as the “operating system” of the cloud. Kubernetes moved to the cloud in 2022.


Trending Sources


Bring Your Own Cloud (BYOC) vs. Dedicated Hosting at ScaleGrid

Scalegrid

Where you decide to host your cloud databases is a huge decision. You have to choose your hosting model, a cloud provider, and then your primary and standby regions to deploy to. What is ScaleGrid’s Bring Your Own Cloud Plan? Here are the databases and cloud providers supported through each model.

Cloud 242

Azure Virtual Machines for SQL Server Usage

SQL Performance

One initial, easy step toward moving your SQL Server on-premises workloads to the cloud is using Azure VMs to run your SQL Server workloads in an infrastructure as a service (IaaS) scenario. One important choice you will still have to make is what type and size of Azure virtual machine to use for your existing SQL Server workload.
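Because the IaaS approach runs a full SQL Server instance on the VM, applications connect to it exactly as they would to an on-premises server. The sketch below is a minimal Java/JDBC connectivity check under that assumption; the hostname, database name, and credentials are placeholders rather than values from the article, and it expects the Microsoft mssql-jdbc driver on the classpath.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class AzureVmSqlServerCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details: replace with your VM's DNS name,
        // database, and credentials. Port 1433 must be reachable (e.g. allowed
        // in the VM's network security group) for this to work.
        String url = "jdbc:sqlserver://my-sql-vm.eastus.cloudapp.azure.com:1433;"
                   + "databaseName=AdventureWorks;encrypt=true;trustServerCertificate=true";

        try (Connection conn = DriverManager.getConnection(url, "sqladmin", "<password>");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT @@VERSION")) {
            if (rs.next()) {
                // Prints the SQL Server build running on the Azure VM.
                System.out.println(rs.getString(1));
            }
        }
    }
}
```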

Azure 72

What is a Distributed Storage System

Scalegrid

Key Takeaways: Distributed storage systems benefit organizations by enhancing data availability, fault tolerance, and system scalability, leading to cost savings from reduced hardware, energy, and personnel needs. They maintain fault tolerance and redundancy by replicating data across the various nodes in the system.
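The fault-tolerance claim comes down to replication: every write is copied to several nodes, so losing a minority of nodes loses no data, and a majority of acknowledgments is enough for the write to count. The sketch below is a toy, in-memory Java illustration of that idea, not ScaleGrid’s implementation; the node count, quorum rule, and simulated failure are assumptions.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of a replicated key-value store: every write goes to all live
// replicas and succeeds only if a majority acknowledge it. Because each key
// lives on several nodes, losing a minority of nodes loses no data.
public class ReplicatedStore {
    private final List<Map<String, String>> replicas = new ArrayList<>();
    private final boolean[] up;   // simulated node health
    private final int quorum;

    ReplicatedStore(int nodeCount) {
        for (int i = 0; i < nodeCount; i++) replicas.add(new HashMap<>());
        up = new boolean[nodeCount];
        Arrays.fill(up, true);
        quorum = nodeCount / 2 + 1;           // majority quorum
    }

    void failNode(int i) { up[i] = false; }   // simulate a node outage

    /** Replicate the write to every live node; succeed on a majority of acks. */
    boolean put(String key, String value) {
        int acks = 0;
        for (int i = 0; i < replicas.size(); i++) {
            if (!up[i]) continue;             // a real system would retry or time out
            replicas.get(i).put(key, value);
            acks++;
        }
        return acks >= quorum;
    }

    /** Read from the first live replica that holds the key. */
    String get(String key) {
        for (int i = 0; i < replicas.size(); i++) {
            if (!up[i]) continue;
            String v = replicas.get(i).get(key);
            if (v != null) return v;
        }
        return null;
    }

    public static void main(String[] args) {
        ReplicatedStore store = new ReplicatedStore(3);
        store.put("user:42", "alice");                    // replicated to all 3 nodes
        store.failNode(0);                                // one node goes down...
        System.out.println(store.get("user:42"));         // ...data is still readable: alice
        System.out.println(store.put("user:43", "bob"));  // writes still reach a quorum: true
    }
}
```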

Storage 130

USENIX SREcon APAC 2022: Computing Performance: What's on the Horizon

Brendan Gregg

Ford, et al., “TCP Extensions for Multipath Operation with Multiple Addresses,” [link], Mar 2020; [Gregg 20] Brendan Gregg, “Systems Performance: Enterprise and the Cloud, Second Edition,” Addison-Wesley, 2020; [Hruska 20] Joel Hruska, “Intel Demos PCIe 5.0 …”


The Return of the Frame Pointers

Brendan Gregg

Only in extreme circumstances does the cost (in processor time and I-cache footprint) translate to a tangible benefit - circumstances which usually resort to hand-coded assembly anyway. It shouldn't be 10%, unless it's cache effects. And for leaf routines (which never establish a frame), this is a non-issue.

Java 145