SQL Server Hardware Performance Tuning

SQL Shack

SQL Server performance tuning can be a difficult assignment, especially when working with a massive database where even a minor change can have a significant impact on existing query performance.

SQL Server Hardware Optimization

SQL Server Performance

An important concern in optimizing the hardware platform is identifying the hardware components that restrict performance, known as bottlenecks. Quite often, the problem isn’t correcting performance bottlenecks as much as it is identifying them in the first place. Start by obtaining a performance baseline: monitor the server over time so that you can determine server averages […].

An open-source benchmark suite for microservices and their hardware-software implications for cloud & edge systems

The Morning Paper

An open-source benchmark suite for microservices and their hardware-software implications for cloud & edge systems, Gan et al., ASPLOS’19. The paper examines the implications of microservices at the hardware, OS and networking stack, cluster management, and application framework levels, as well as the impact of tail latency.

Using hardware performance counters to determine how often both logical processors are active on an Intel CPU

John McCalpin

Most Intel microprocessors support “HyperThreading” (Intel’s trademark for their implementation of “simultaneous multithreading”) — which allows the hardware to support (typically) two “Logical Processors” for each physical core. Last year I was trying to diagnose a mild slowdown in a code, and wanted to be able to use the hardware performance counters to divide processor activity into four categories: Neither Logical Processor active.
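
The excerpt cuts off after the first category; the remaining three are only the first Logical Processor active, only the second active, and both active. Assuming you have already collected, for one physical core, the unhalted-cycle counts for each Logical Processor plus the "any thread" variant and the elapsed cycles (e.g. via perf), the split is simple inclusion-exclusion. The names below are illustrative, not McCalpin's:

```cpp
#include <cstdint>
#include <cstdio>

// Illustrative decomposition for one physical core with two Logical Processors.
// Inputs (all in core cycles, gathered elsewhere):
//   t0, t1  : unhalted cycles counted on Logical Processor 0 / 1
//   any     : unhalted cycles counted with the "any thread" qualifier
//   elapsed : total cycles in the measurement interval
struct HtBreakdown { uint64_t both, only0, only1, neither; };

HtBreakdown split(uint64_t t0, uint64_t t1, uint64_t any, uint64_t elapsed) {
    HtBreakdown b;
    b.both    = t0 + t1 - any;   // cycles counted by both LPs appear twice in t0+t1
    b.only0   = any - t1;        // active cycles not attributable to LP1
    b.only1   = any - t0;        // active cycles not attributable to LP0
    b.neither = elapsed - any;   // core fully idle
    return b;
}

int main() {
    HtBreakdown b = split(600, 500, 800, 1000);   // made-up counts
    std::printf("both=%llu only0=%llu only1=%llu neither=%llu\n",
                (unsigned long long)b.both, (unsigned long long)b.only0,
                (unsigned long long)b.only1, (unsigned long long)b.neither);
}
```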

C&B Session: atomic Weapons – The C++11 Memory Model and Modern Hardware

Sutter's Mill

atomic<> Weapons: The C++11 Memory Model and Modern Hardware. Achingly, heartbreakingly clear, because some hardware incents you to pull out the big guns to achieve top performance, and C++ programmers are just so addicted to full performance that they’ll reach for the big red levers with the flashing warning lights. We’ll include clear answers to several FAQs: “how do the compiler and hardware cooperate to respect these rules?”, “what is a race condition?”, …
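
As a reminder of what those rules buy you in practice, here is a minimal release/acquire sketch (my own example, not taken from the talk): the release store publishes the payload, and any thread that observes the flag with an acquire load is guaranteed to also see the payload, with no data race.

```cpp
#include <atomic>
#include <cassert>
#include <thread>

// Minimal sketch of a release/acquire handoff. The store to `ready` with
// memory_order_release guarantees that the write to `payload` is visible to
// any thread that observes ready == true with memory_order_acquire.
int payload = 0;
std::atomic<bool> ready{false};

void producer() {
    payload = 42;                                  // ordinary write
    ready.store(true, std::memory_order_release);  // publish
}

void consumer() {
    while (!ready.load(std::memory_order_acquire)) { /* spin */ }
    assert(payload == 42);                         // guaranteed to see 42
}

int main() {
    std::thread t1(producer), t2(consumer);
    t1.join();
    t2.join();
}
```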

Achieving 100Gbps intrusion prevention on a single server

The Morning Paper

So we need low latency, but we also need very high throughput. A recurring theme in IDS/IPS literature is the gap between the workloads they need to handle and the capabilities of existing hardware/software implementations.

SKP's Java/Java EE Gotchas: Clash of the Titans, C++ vs. Java!

DZone

As a software engineer, the mind is trained to seek optimizations in every aspect of development and to squeeze every bit of available CPU resource out of the system to deliver a performant application.

Identifying Optane drives in Linux

n0derunner

The easiest way to identify NVMe drives backed by either NAND flash or Optane is to run $ lspci -v. The output will look like this for NVMe/NAND: 00:0d.0 …

A Brief Guide of xPU for AI Accelerators

ACM Sigarch

HPU: the Holographic Processing Unit (HPU) is the specific hardware of Microsoft’s HoloLens. SPU: the Stream Processing Unit (SPU) is specialized hardware for processing video data streams. TPU: the Tensor Processing Unit (TPU) is Google’s specialized hardware for neural networks. Movidius, which was acquired by Intel in 2016, develops its VPU series named Myriad, hardware optimized for computer vision tasks.

Using Machine Learning to Ensure the Capacity Safety of Individual Microservices

Uber Engineering

Reliability engineering teams at Uber build the tools, libraries, and infrastructure that enable engineers to operate our thousands of microservices reliably at scale.

A persistent problem: managing pointers in NVM

The Morning Paper

A persistent problem: managing pointers in NVM, Bittman et al. Byte-addressable non-volatile memory (NVM) will fundamentally change the way hardware interacts, the way operating systems are designed, and the way applications operate on data.

Peloton: Uber’s Unified Resource Scheduler for Diverse Cluster Workloads

Uber Engineering

Cluster management, a common software infrastructure among technology companies, aggregates compute resources from a collection of physical hosts into a shared resource pool, amplifying compute power and allowing for the flexible use of data center hardware.

Compress objects, not cache lines: an object-based compressed memory hierarchy

The Morning Paper

… to realize these insights, hardware needs to access data at object granularity and must have control over pointers between objects. Hotpads is a hardware-managed hierarchy of scratchpad-like memories called pads. Collection evictions that move objects up the hierarchy occur entirely in hardware and are much faster than software GC because pads are small.

Efficient lock-free durable sets

The Morning Paper

Efficient lock-free durable sets, Zuriel et al., OOPSLA’19. Given non-volatile memory (NVRAM), the naive hope for persistence is that it would be a no-op: what happens in memory, stays in memory.
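
The reason that hope fails, and the motivation for durable data structures, is that ordinary stores land in the volatile CPU caches first. A minimal x86 sketch of what an explicitly durable write involves (illustrative only; persist_store is a hypothetical helper, not the paper's API, and it assumes a CLWB-capable CPU):

```cpp
#include <immintrin.h>   // _mm_clwb, _mm_sfence (compile with -mclwb)
#include <cstdint>

// A store to NVM-backed memory is not durable until the cache line holding it
// has been written back to the media and that write-back has been ordered.
// Without the flush + fence, the value can be lost on power failure even
// though the program already saw it "in memory".
void persist_store(uint64_t* nvm_word, uint64_t value) {
    *nvm_word = value;    // store lands in the (volatile) CPU cache
    _mm_clwb(nvm_word);   // request write-back of the cache line to the media
    _mm_sfence();         // order the write-back before subsequent stores
}
```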

An empirical guide to the behavior and use of scalable persistent memory

The Morning Paper

An empirical guide to the behavior and use of scalable persistent memory, Yang et al. EWR is the ratio of bytes issued by the iMC divided by the number of bytes actually written to the 3D-XPoint media (as measured by the DIMM’s hardware counters).

Approaches to System Security: Using Cryptographic Techniques to Minimize Trust

ACM Sigarch

This is the first post in a series on different approaches to systems security, especially as they apply to hardware and architectural security. The class of techniques described in this blog post, which we broadly refer to as applied hardware and architecture cryptography, applies proven cryptographic techniques to strengthen systems. Naively securing this system would require a large amount of trust: “guns and guards”, trusted personnel, and trusted software and hardware.

Why I hate MPI (from a performance analysis perspective)

John McCalpin

According to Dr. Bandwidth, performance analysis has two recurring themes: How fast should this code (or “simple” variations on this code) run on this hardware? This can start with either a “top-down” or “bottom-up” approach, but in complex codes running on complex hardware, what is really required is both approaches — iterated until the interactions between all the components are understood. The networking hardware.

Boosted race trees for low energy classification

The Morning Paper

Boosted race trees for low energy classification, Tzimpragos et al., ASPLOS’19. The goal is to produce a low-energy hardware classifier for embedded applications doing local processing of sensor data. Race logic has four primary operations that are easy to implement in hardware: MAX, MIN, ADD-CONSTANT, and INHIBIT. One efficient way of doing that in analog hardware is the use of current-starved inverters.
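
For intuition, here is a toy software model of those four primitives (my own sketch, not the paper's hardware): a value is encoded as the arrival time of a signal edge, and "never arrives" is modelled as infinity.

```cpp
#include <algorithm>
#include <cstdio>
#include <limits>

// Toy model of race-logic primitives on arrival times.
using Time = double;
constexpr Time NEVER = std::numeric_limits<Time>::infinity();

Time MAX(Time a, Time b)          { return std::max(a, b); }  // fires on the later edge
Time MIN(Time a, Time b)          { return std::min(a, b); }  // fires on the earlier edge
Time ADD_CONSTANT(Time a, Time c) { return a + c; }           // a fixed delay element
// INHIBIT: pass the data edge only if it beats the inhibiting edge.
Time INHIBIT(Time inhibit, Time data) { return data < inhibit ? data : NEVER; }

int main() {
    // Compare encoded values by racing their edges against each other.
    std::printf("MIN(3,5)=%g MAX(3,5)=%g INHIBIT(4,3)=%g INHIBIT(2,3)=%g\n",
                MIN(3, 5), MAX(3, 5), INHIBIT(4, 3), INHIBIT(2, 3));
}
```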

James Hamilton on reliability

Sutter's Mill

James Hamilton on how to write reliable software in a world where anything that can fail, will fail. Don’t trust hardware or software; then you can build trustworthy hardware and software.

Invited Talk at SuperComputing 2016!

John McCalpin

“Memory Bandwidth and System Balance in HPC Systems”: If you are planning to attend the SuperComputing 2016 conference in Salt Lake City next month, be sure to reserve a spot on your calendar for my talk on Wednesday afternoon (4:15pm-5:00pm).

A peculiar throughput limitation on Intel’s Xeon Phi x200 (Knights Landing)

John McCalpin

Hardware performance counter results for a simple benchmark code calling Intel’s optimized DGEMM implementation for this processor (from the Intel MKL library) show that about 20% of the dynamic instruction count consists of instructions that are not packed SIMD operations. This is an uninspiring fraction of peak performance that would normally suggest significant inefficiencies in either the hardware or the software.

Intel discloses “vector+SIMD” instructions for future processors

John McCalpin

It seems very likely that the hardware has to be able to merge these two load operations into a single L1 Data Cache access to keep the rate of cache accesses from being the performance bottleneck. But two 32-bit loads are only 1/8 of a natural 512-bit cache access, and it seems unlikely that the hardware can merge cache accesses across multiple cycles.

Memory Latency on the Intel Xeon Phi x200 “Knights Landing” processor

John McCalpin

The Xeon Phi x200 (Knights Landing) has a lot of modes of operation (selected at boot time), and the latency and bandwidth characteristics are slightly different for each mode.

From bare-metal to Kubernetes

High Scalability

This is a guest post by Hugues Alary, Lead Engineer at Betabrand, a retail clothing company and crowdfunding platform based in San Francisco. The post walks through: hardware infrastructure, early infrastructure, Rackspace, the scalability and maintainability issue, scaling development processes, the advent of Docker, Kubernetes, learning Kubernetes, officially migrating, the development/staging environments, and a year after.

The Performance Inequality Gap, 2021

Alex Russell

Hardware Past As Performance Prologue. Regardless, the overall story for hardware progress remains grim, particularly when we recall how long device replacement cycles are.

Stuff The Internet Says On Scalability For April 30th, 2021

High Scalability

Hey, HighScalability is back! This channel is the perfect blend of programming, hardware, engineering, and crazy. After watching you’ll feel inadequate, but in an entertained sort of way.

Adaptive Loading - Improving web performance on low-end devices

Addy Osmani

Adaptive Loading is a pattern for delivering a fast core experience to all users (including those on low-end devices) and then progressively adding high-end-only features if a user's network and hardware can handle them.

Talk Video: Welcome to the Jungle

Sutter's Mill

Last month in Kansas City I gave a talk on “Welcome to the Jungle,” based on my recent essay of the same name (sequel to “The Free Lunch Is Over”) concerning the turn to mainstream heterogeneous distributed computing and the end of Moore’s Law. Now welcome to the hardware jungle.

Keynote at the AMD Fusion Developer Summit

Sutter's Mill

We know that getting full computational performance out of most machines—nearly all desktops and laptops, most game consoles, and the newest smartphones—already means harnessing local parallel hardware, mainly in the form of multicore CPU processing. You can expect the above keynote to be, well, keynote-y… oriented toward software product features and of course AMD’s hardware, with plenty of forward-looking industry vision style material.

Automation of Business Transaction Reconciliation

DZone

A fail-over condition arises due to uncontrolled network failure, OS failure, hardware failure, or a DR drill. Reducing disaster-recovery time is crucial for organizations that need to achieve operational resiliency.

Welcome to the Jungle

Sutter's Mill

With so much happening in the computing world, now seemed like the right time to write “Welcome to the Jungle” – a sequel to my earlier “The Free Lunch Is Over” essay. Here’s the introduction: Welcome to the Jungle. Now welcome to the hardware jungle. For the first time in the history of computing, mainstream hardware is no longer a single-processor von Neumann machine, and never will be again.

Planning Your API Roadmap

DZone

APIs — the current “big thing” — offer the opportunity for modern organizations to unlock new and lucrative business models. The very flexible nature of the technology opens many doors, including business collaborations, reuse in third-party products, or even conquering hardware barriers by reaching a spectrum of devices. The article below covers some tips on how to spin the API flywheel and leverage its possibilities.

Two Sessions: C++ Concurrency and Parallelism – 2012 State of the Art (and Standard)

Sutter's Mill

Mainstream hardware – many kinds of parallelism: What’s the relationship among multi-core CPUs, hardware threads, SIMD vector units (Intel SSE and AVX, ARM Neon), and GPGPU (general-purpose computation on GPUs, which I covered at C++ and Beyond 2011)? Task and data parallelism: What’s the difference between task parallelism and data parallelism, which kind of hardware does each allow you to exploit, and why?
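
For readers who have not seen the distinction before, here is a minimal standard-library sketch (my own illustration, not material from the session): two independent computations running side by side (task parallelism) versus one computation split across slices of a data set (data parallelism). Real code would more likely use a library such as TBB, PPL, or the C++17 parallel algorithms.

```cpp
#include <algorithm>
#include <cstdio>
#include <future>
#include <numeric>
#include <thread>
#include <vector>

long task_a() { return 1 + 1; }   // stand-in independent task
long task_b() { return 2 * 2; }   // another independent task

// Data parallelism: split one big sum across worker threads.
long parallel_sum(const std::vector<long>& v, unsigned nthreads) {
    std::vector<std::future<long>> parts;
    const size_t chunk = v.size() / nthreads;
    for (unsigned t = 0; t < nthreads; ++t) {
        const size_t begin = t * chunk;
        const size_t end   = (t + 1 == nthreads) ? v.size() : begin + chunk;
        parts.push_back(std::async(std::launch::async, [&v, begin, end] {
            return std::accumulate(v.begin() + begin, v.begin() + end, 0L);
        }));
    }
    long total = 0;
    for (auto& p : parts) total += p.get();
    return total;
}

int main() {
    // Task parallelism: two different computations in flight at once.
    auto fa = std::async(std::launch::async, task_a);
    auto fb = std::async(std::launch::async, task_b);
    std::printf("tasks: %ld %ld\n", fa.get(), fb.get());

    // Data parallelism: the same computation applied to slices of one data set.
    std::vector<long> data(1'000'000, 1);
    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    std::printf("sum: %ld\n", parallel_sum(data, n));
}
```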

C++ AMP keynote is online

Sutter's Mill

Portable: It allows shipping a single EXE that can use any combination of GPU vendors’ hardware. The initial implementation uses DirectCompute and supports all devices that are DX11 capable; DirectCompute is just an implementation detail of the first release, and the model can (and I expect will) be implemented to directly talk to any interesting hardware. More to come…

Configuration Testing – An Introduction

Testlodge

These systems are a combination of different hardware and software which have been configured to perform the desired task. Configuration testing is performed to discover the optimum combinations of software and hardware specifications that allow the system to work without flaws.

Talk Video: Welcome to the Jungle (60 min version + Q&A)

Sutter's Mill

While visiting Facebook earlier this month, I gave a shorter version of my “Welcome to the Jungle” talk, based on the eponymous WttJ article. They made a nice recording and it’s now available online at Facebook Engineering. Title: Herb Sutter: Welcome to the Jungle. Now welcome to the hardware jungle.

Faster remainders when the divisor is a constant: beating compilers and libdivide

Daniel Lemire

Division by a power of two (/ 2^N) can be implemented as a right shift if we are working with unsigned integers, which compiles to a single instruction; that is possible because the underlying hardware uses base 2. Not all instructions on modern processors cost the same: additions and subtractions are cheaper than multiplications, which are themselves cheaper than divisions. For this reason, compilers frequently replace division instructions with multiplications.
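
To make the two claims concrete, here is a small C++ sketch: the shift identity for unsigned powers of two, and a remainder-by-constant helper in the multiply-and-take-high-bits style the post describes. The names precompute_c and fastmod_u32 are my own paraphrase rather than a drop-in copy of the author's library, and the code relies on the GCC/Clang __uint128_t extension.

```cpp
#include <cassert>
#include <cstdint>

// Division by a power of two: for unsigned integers, n / 8 is exactly n >> 3,
// which is why compilers emit a single shift instruction for it.
uint32_t div8(uint32_t n) { return n >> 3; }

// Remainder by a constant divisor d: precompute c once, then each remainder
// costs a couple of multiplications instead of a division.
uint64_t precompute_c(uint32_t d) { return UINT64_MAX / d + 1; }

uint32_t fastmod_u32(uint32_t n, uint64_t c, uint32_t d) {
    uint64_t lowbits = c * n;   // intentionally wraps modulo 2^64
    return (uint32_t)(((__uint128_t)lowbits * d) >> 64);
}

int main() {
    assert(div8(100) == 100 / 8);
    const uint32_t d = 9;
    const uint64_t c = precompute_c(d);
    for (uint32_t n = 0; n < 100000; ++n)
        assert(fastmod_u32(n, c, d) == n % d);   // spot-check the identity
}
```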

Two More C&B Sessions: C++0x Memory Model (Scott) and Exceptional C++0x (me)

Sutter's Mill

Scott Meyers, Andrei Alexandrescu and I are continuing to craft and announce the technical program for C++ and Beyond (C&B) 2011, and two more sessions are now posted. All talks are brand-new material created specifically for C&B 2011. Here are short blurbs; follow the links for longer descriptions.

QA Mentor Helps Clients Optimize Apps and Websites

QAMentor

The technology industry has made leaps and bounds in the last decade; in fact, so much that it’s hard to make sure all the new software and hardware available is safe and of good quality. Application testing has become vital in recent years to ensure that a platform is safe, whether it be with location …

Software Interrupt Time – ‘si’ Time in top

DZone

CPU consumption in Unix/Linux operating systems is studied using eight different metrics: user CPU time, system CPU time, nice CPU time, idle CPU time, waiting CPU time, hardware interrupt CPU time, software interrupt CPU time, and stolen CPU time.
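
As a concrete reference point, the aggregate "cpu" line of /proc/stat on Linux exposes these same counters in a fixed order. The sketch below (my own illustration, not from the article) reads them and reports the software-interrupt share that top abbreviates as 'si'; note it is a since-boot average rather than top's per-interval percentage.

```cpp
#include <cstdio>
#include <fstream>
#include <string>

// Linux-only sketch: the first line of /proc/stat lists cumulative jiffies as
// user, nice, system, idle, iowait (waiting), irq (hardware interrupt),
// softirq (software interrupt, top's "si"), steal.
int main() {
    std::ifstream stat("/proc/stat");
    std::string cpu;
    unsigned long long user, nice, system, idle, iowait, irq, softirq, steal;
    stat >> cpu >> user >> nice >> system >> idle >> iowait >> irq >> softirq >> steal;

    unsigned long long total = user + nice + system + idle + iowait + irq + softirq + steal;
    if (cpu == "cpu" && total > 0)
        std::printf("softirq (si) share since boot: %.2f%%\n", 100.0 * softirq / total);
}
```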

The future of synthetic testing is in the cloud

Dynatrace

When we wanted to add a location, we had to ship hardware and get someone to install that hardware in a rack with power and network. Hardware was outdated. Fixed hardware is a single point of failure – even when we had redundant machines. I remember when we would sign a new customer who wanted to add hundreds or thousands of tests: we had to slow them down so we had time to add more hardware. Keep hardware and browsers updated at all times.

Reinventing virtualization with the AWS Nitro System

All Things Distributed

This realization forced us to rethink everything and became the spark for our creating the Nitro System, the first infrastructure platform to offload virtualization functions to dedicated hardware and software.