Bandwidth or Latency: When to Optimise for Which

CSS Wizardry

When it comes to network performance, there are two main limiting factors that will slow you down: bandwidth and latency. Latency is defined as… Where bandwidth deals with capacity, latency is more about speed of transfer.
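
As a minimal sketch of why that distinction matters, the toy calculation below models total transfer time as one round trip of latency plus payload size divided by bandwidth. The link numbers are illustrative assumptions, not figures from the article.

```c
#include <stdio.h>

/* Sketch of the bandwidth-vs-latency trade-off: total transfer time is
 * roughly round-trip latency + payload size / bandwidth. The link values
 * below are illustrative assumptions, not measurements. */
int main(void) {
    double rtt_s     = 0.100;                   /* 100 ms round-trip latency */
    double bandwidth = 5e6;                     /* 5 MB/s usable bandwidth   */
    double sizes[]   = { 10e3, 100e3, 10e6 };   /* 10 kB, 100 kB, 10 MB      */

    for (int i = 0; i < 3; i++) {
        double t = rtt_s + sizes[i] / bandwidth;
        printf("%8.0f bytes -> %7.1f ms (latency share: %5.1f%%)\n",
               sizes[i], t * 1000.0, 100.0 * rtt_s / t);
    }
    return 0;
}
```

Under these assumptions the small transfers come out almost entirely latency-bound, while only the large one is meaningfully limited by bandwidth.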

Uber’s Big Data Platform: 100+ Petabytes with Minute Latency

Uber Engineering

Uber is committed to delivering safer and more reliable transportation across our global markets. To accomplish this, Uber relies heavily on making data-driven decisions at every level, from forecasting rider demand during high traffic events to identifying and addressing bottlenecks …

RSocket vs. gRPC Benchmark

DZone

Almost every time I present RSocket to an audience, there will be someone asking the question: "How does RSocket compare to gRPC?" Today we are going to find out.

Memory Latency on the Intel Xeon Phi x200 “Knights Landing” processor

John McCalpin

The Xeon Phi x200 (Knights Landing) has a lot of modes of operation (selected at boot time), and the latency and bandwidth characteristics are slightly different for each mode. It is also important to remember that the latency can be different for each physical address, depending on the location of the requesting core, the location of the coherence agent responsible for that address, and the location of the memory controller for that address. The reported MCDRAM maximum latency is 156.1 ns.
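
For context on how numbers like that are typically obtained, here is a generic pointer-chasing sketch: a chain of dependent loads whose average time per iteration approximates load-to-use memory latency. It illustrates the general technique only, not Dr. McCalpin's methodology or a Knights Landing-specific configuration; the buffer size and iteration count are arbitrary assumptions.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Generic pointer-chasing latency sketch: each load depends on the
 * previous one, so the average time per iteration approximates the
 * load-to-use latency of whichever level of the memory hierarchy the
 * buffer lands in. Illustrative only -- sizes are arbitrary assumptions. */
#define N ((size_t)1 << 24)   /* 16M pointers (~128 MB), far beyond cache */

int main(void) {
    size_t *chain = malloc(N * sizeof *chain);
    size_t *idx   = malloc(N * sizeof *idx);
    if (!chain || !idx) return 1;

    /* Shuffle the indices (Fisher-Yates) and link them into one big cycle
     * so hardware prefetchers cannot guess the next address. */
    for (size_t i = 0; i < N; i++) idx[i] = i;
    srand(42);
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (size_t)rand() % (i + 1);
        size_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
    }
    for (size_t i = 0; i + 1 < N; i++) chain[idx[i]] = idx[i + 1];
    chain[idx[N - 1]] = idx[0];

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    size_t p = 0;
    for (size_t i = 0; i < N; i++) p = chain[p];   /* dependent loads */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("average load latency: %.1f ns (checksum %zu)\n", ns / (double)N, p);
    free(chain);
    free(idx);
    return 0;
}
```

Because every load depends on the previous one, bandwidth and prefetching cannot hide the delay, which is exactly the property a latency benchmark needs.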

This spring: High-Performance and Low-Latency C++ (Stockholm) and ACCU (Bristol)

Sutter's Mill

Tue-Thu Apr 25-27: High-Performance and Low-Latency C++ (Stockholm). On April 25-27, I’ll be in Stockholm (Kista) giving a three-day seminar on “High-Performance and Low-Latency C++.” This intensive three-day course will provide developers with the knowledge and skills required to write high-performance and low-latency code on today’s modern systems using modern C++11/14/17.

Latency: Will it undermine the most interesting 5G use cases?

VoltDB

Unfortunately, this means that the age-old Telco bugbears will rear their ugly heads again, including latency. 5G, as a fundamental requirement, mandates a 1 millisecond latency from the data source to its destination. With the 5G revolution, operators will need to manage hundreds of edge deployments, and maintain the physical space and hardware needed to achieve that 1 ms of latency.

Self-Host Your Static Assets

CSS Wizardry

Every new origin we need to visit needs a connection opening, and that can be very costly: DNS resolution, TCP handshakes, and TLS negotiation all add up, and the story gets worse the higher the latency of the connection is.
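
To put rough numbers on that cost, the back-of-the-envelope sketch below counts the round trips a brand-new origin typically needs: one for DNS, one for the TCP handshake, and one or two for TLS negotiation depending on the protocol version. The RTT values are illustrative assumptions, not measurements from the article.

```c
#include <stdio.h>

/* Rough model of new-origin connection setup: DNS (~1 RTT) + TCP handshake
 * (1 RTT) + TLS negotiation (1 RTT for TLS 1.3, 2 RTTs for TLS 1.2).
 * The round-trip times below are illustrative assumptions. */
int main(void) {
    double rtts[]        = { 0.028, 0.100, 0.300 };   /* seconds */
    const char *labels[] = { "cable", "4G", "poor mobile" };

    for (int i = 0; i < 3; i++) {
        double dns = rtts[i], tcp = rtts[i];
        double tls13 = 1 * rtts[i], tls12 = 2 * rtts[i];
        printf("%-12s new-origin setup: %5.0f ms (TLS 1.3) / %5.0f ms (TLS 1.2)\n",
               labels[i],
               (dns + tcp + tls13) * 1000.0,
               (dns + tcp + tls12) * 1000.0);
    }
    return 0;
}
```

On the assumed 300 ms mobile link, a single extra origin costs on the order of a second before the first byte of the asset can even be requested.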

Invited Talk at SuperComputing 2016!

John McCalpin

“Memory Bandwidth and System Balance in HPC Systems”: If you are planning to attend the SuperComputing 2016 conference in Salt Lake City next month, be sure to reserve a spot on your calendar for my talk on Wednesday afternoon (4:15pm-5:00pm).

Time to First Byte: What It Is and Why It Matters

CSS Wizardry

The first—and often most surprising for people to learn—thing that I want to draw your attention to is that TTFB counts one whole round trip of latency. The reason is that mobile networks are, as a rule, high-latency connections.
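
As a small worked example of that point, the sketch below treats TTFB on an already-open connection as one full round trip plus server processing time. The figures are illustrative assumptions rather than numbers from the article.

```c
#include <stdio.h>

/* Rough decomposition of Time to First Byte on an already-open connection:
 * one full round trip to send the request and receive the first byte of the
 * response, plus server processing time. Values are illustrative assumptions. */
int main(void) {
    double server_ms     = 50.0;                    /* backend "think time"   */
    double rtts_ms[]     = { 15.0, 80.0, 300.0 };   /* wired, 4G, poor mobile */
    const char *nets[]   = { "wired", "4G", "poor mobile" };

    for (int i = 0; i < 3; i++) {
        double ttfb = rtts_ms[i] + server_ms;       /* TTFB counts a whole RTT */
        printf("%-12s TTFB ~ %5.0f ms (%5.1f%% is pure network latency)\n",
               nets[i], ttfb, 100.0 * rtts_ms[i] / ttfb);
    }
    return 0;
}
```

With the same 50 ms backend, the assumed high-latency mobile connection spends the large majority of its TTFB purely on the network.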

Why Telcos Need a Real-Time Analytics Strategy

VoltDB

Historically, telco analytics have been limited and difficult. Telco networks and the systems that support those networks are some of the most advanced technology solutions in existence.

Expanding the Cloud: Faster, More Flexible Queries with DynamoDB

All Things Distributed

While DynamoDB already allows you to perform low-latency queries based on your table’s … This gives you the ability to perform richer queries while still meeting the low-latency demands of responsive, scalable applications.

Memory-Optimized TempDB Metadata in SQL Server 2019

SQL Shack

Introduction: In-memory technologies are one of the greatest ways to improve performance and combat contention in computing today. TempDB is one of the biggest sources of latency in […].

Extending Vector with eBPF to inspect host and container performance

The Netflix TechBlog

Today we are excited to announce latency heatmaps and improved container support for our on-host monitoring solution, Vector: remotely view real-time process scheduler latency and TCP throughput with Vector and eBPF. What is Vector?

Making Cloud.typography Fast(er)

CSS Wizardry

Although this response has a 0B filesize, we will always take the latency hit on every single page view (and this response is basically 100% latency). …com, which introduces yet more latency for the connection setup.

RPCValet: NI-driven tail-aware balancing of µs-scale RPCs

The Morning Paper

Last week we learned about the increased tail-latency sensitivity of microservices-based applications with high RPC fan-outs. Seer uses estimates of queue depths to mitigate latency spikes on the order of 10-100ms, in conjunction with a cluster manager.

Predictive CPU isolation of containers at Netflix

The Netflix TechBlog

Because microprocessors are so fast, computer architecture design has evolved towards adding various levels of caching between compute units and the main memory, in order to hide the latency of bringing the bits to the brains.

Three Other Models of Computer System Performance: Part 1

ACM Sigarch

How many buffers are needed to track pending requests as a function of needed bandwidth and expected latency? Can one both minimize latency and maximize throughput for unscheduled work? Recall that latency—in units of time—is the time it takes to do a task (e.g., …).

Three Other Models of Computer System Performance: Part 2

ACM Sigarch

How many buffers are needed to track pending requests as a function of needed bandwidth and expected latency? Can one both minimize latency and maximize throughput for unscheduled work? Let L denote the average total latency to handle a task, equal to Q + S.
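
A minimal worked instance of the buffer-sizing question, using Little's law (outstanding work = throughput × latency), one of the simple models this series discusses. The bandwidth, latency, and request-size values are illustrative assumptions, not numbers from the SIGARCH posts.

```c
#include <stdio.h>

/* Little's law sketch: to sustain B bytes/s when each request takes L
 * seconds end to end (L = Q + S), roughly B * L bytes must be in flight.
 * All numbers below are illustrative assumptions. */
int main(void) {
    double bandwidth  = 100e9;    /* 100 GB/s target memory bandwidth        */
    double latency_s  = 100e-9;   /* 100 ns average total latency (L = Q + S) */
    double line_bytes = 64.0;     /* one cache line per pending request       */

    double bytes_in_flight = bandwidth * latency_s;
    double buffers_needed  = bytes_in_flight / line_bytes;

    printf("bytes in flight: %.0f, pending-request buffers needed: ~%.0f\n",
           bytes_in_flight, buffers_needed);
    return 0;
}
```

Under these assumptions about 10 kB must be in flight at all times, i.e. roughly 156 pending-request buffers, which is why buffer count scales directly with the bandwidth-latency product.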

Expanding the Cloud - Introducing the AWS Asia Pacific (Tokyo) Region

All Things Distributed

Japanese companies and consumers have become used to low latency and high-speed networking available between their businesses, residences, and mobile devices. The advanced Asia Pacific network infrastructure also makes the AWS Tokyo Region a viable low-latency option for customers from South Korea.

Cache-Control for Civilians

CSS Wizardry

If, however, there wasn’t a new file on the server, we’ll bring back a 304 header, no new file, but an entire roundtrip of latency. We can completely cut out the overhead of a roundtrip of latency. This means no unnecessary roundtrips spent retrieving 304 responses, which potentially saves us a lot of latency on the critical path (CSS blocks rendering). On high-latency connections, this saving could be tangible.

Expanding the Cloud - New AWS Region: US-West (Northern California)

All Things Distributed

This new Region consists of multiple Availability Zones and provides low-latency access to the AWS services from, for example, the Bay Area.

Google’s June 2nd Outage: Their Status Page ≠ Reality

DZone

From 11:48 to 12:10 latency for at least 50% of requests was significantly higher from us-east1 and us-central1 to GCS regional buckets in us-east1, us-central1, and europe-west2. From 11:48 to 12:03 latency was also elevated for europe-west2 to europe-west2 regional bucket access.

Stuff The Internet Says On Scalability For March 1st, 2019

High Scalability

Wake up! It's HighScalability time: 10 years of AWS architecture: increasing simplicity or increasing complexity? (Michael Wittig). It was made possible by using a low latency of 0.1 seconds: the lower the latency, the more responsive the robot. Do you like this sort of Stuff?

Employing QUIC Protocol to Optimize Uber’s App Performance

Uber Engineering

Uber operates on a global scale across more than 600 cities, with our apps relying entirely on wireless connectivity from over 4,500 mobile carriers. To deliver the real-time performance expected from Uber’s users, our mobile apps require low-latency and highly …

An open-source benchmark suite for microservices and their hardware-software implications for cloud & edge systems

The Morning Paper

The paper examines the implications of microservices at the hardware, OS and networking stack, cluster management, and application framework levels, as well as the impact of tail latency. The bottom line shows the tail latency impact in the microservices-based applications.

A case for managed and model-less inference serving

The Morning Paper

Making queries to an inference engine has many of the same throughput, latency, and cost considerations as making queries to a datastore, and more and more applications are coming to depend on such queries. A case for managed and model-less inference serving Yadwadkar et al., HotOS’19.

Automating chaos experiments in production

The Morning Paper

Two failure modes we focus on are a service becoming slower (increase in response latency) or a service failing outright (returning errors). If you’ve read the SRE book you’ve probably come across the “four golden signals” (p60): latency, throughput, error rate, and saturation.

Top 10 Tips for Making the Spark + Alluxio Stack Blazing Fast

DZone

The Apache Spark + Alluxio stack is getting quite popular, particularly for the unification of data access across S3 and HDFS. In addition, compute and storage are increasingly being separated, causing larger latencies for queries. Alluxio is leveraged as compute-side virtual storage to improve performance. But to get the best performance, like any technology stack, you need to follow the best practices.

Key Considerations for a Modern Database to Operate at Scale

VoltDB

Performance consists of two aspects: throughput and latency. Humans will wait much longer than an API will, since APIs have strict latency expectations due to timeouts. But in most cases in modern applications, the application expectation is far less than the baked-in latency.

Fast key-value stores: an idea whose time has come and gone

The Morning Paper

In ProtoCache (a component of a widely used Google application), 27% of its latency when using a traditional S+RInK design came from marshalling/un-marshalling. … the network latency of fetching data over the network, even considering fast data center networks …

Re-Architecting the Video Gatekeeper

The Netflix TechBlog

This data-propagation latency was unacceptable … The Tangible Result: With the data propagation latency issue solved, we were able to re-implement the Gatekeeper system to eliminate all I/O boundaries.

Extending Dynatrace

Dynatrace

Dynatrace monitors your full stack and offers you thousands of metrics with almost zero configuration. With insights from Dynatrace into network latency and utilization of your cloud resources, you can design your scaling mechanisms and save on costly CPU hours.

Expanding the AWS Cloud – Introducing the AWS Asia Pacific (Hong Kong) Region

All Things Distributed

Today, I am happy to introduce the new AWS Asia Pacific (Hong Kong) Region. AWS customers can now use this Region to serve their end users in Hong Kong SAR at a lower latency, and to comply with any data locality requirements.

The Three Types of Performance Testing

CSS Wizardry

Things always feel fast when we’re developing because, more often than not, we’re working on high-spec machines on dedicated networks, and also serving from localhost, which removes the bulk of the latency and bandwidth issues that a real user would suffer.

MezzFS — Mounting object storage in Netflix’s media processing platform

The Netflix TechBlog

And shaving off hours is especially beneficial in latency-sensitive workflows, like encoding videos that are released on Netflix the day they are shot.

Stuff The Internet Says On Scalability For November 23rd, 2018

High Scalability

Wake up! It's HighScalability time: Curious how SpaceX's satellite constellation works? Here's some fancy FCC reverse engineering magic. Delay is Not an Option: Low Latency Routing in Space (Murat). Do you like this sort of Stuff? Please support me on Patreon. I'd really appreciate it.

Stuff The Internet Says On Scalability For March 22nd, 2019

High Scalability

Wake up! It's HighScalability time: Van Gogh? Nope. "…µs of replication latency on lossy Ethernet, which is faster than or comparable to specialized replication systems that use programmable switches, FPGAs, or RDMA."

How Google PageSpeed Works: Improve Your Score and Search Engine Ranking

CSS - Tricks

Estimated Input Latency. This article is from my friend Ben, who runs Calibre, a tool for monitoring the performance of websites. We use Calibre here on CSS-Tricks to keep an eye on things.

Stuff The Internet Says On Scalability For December 21st, 2018

High Scalability

Wake up! It's HighScalability time: Have a very scalable Xmas everyone! See you in the New Year. Tim Bray, How to talk about [Serverless Latency]: To start with, don’t just say “I need 120ms.” Do you like this sort of Stuff? Please support me on Patreon.

Exercises in Emulation: Xbox 360’s FMA Instruction

Random ASCII

And, FMA instructions often have lower latency than a multiply followed by an add instruction. On the Xbox 360 CPU the latency and throughput of FMA was the same as for fmul or fadd so using an FMA instead of an fmul followed by a dependent fadd would halve the latency.
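
As a small illustration of why FMA behaves differently from a separate multiply and add, the sketch below contrasts the two on a case where the single rounding of fma() changes the result. It demonstrates the rounding behaviour only; the latency and throughput figures discussed above are hardware-specific facts that the code does not measure.

```c
#include <stdio.h>
#include <math.h>

/* Contrast a separate multiply-then-add with a fused multiply-add.
 * The fused form rounds only once, which is part of why emulating it
 * exactly (as in the Xbox 360 article) is tricky. Note: some compilers
 * contract a*b + c into an FMA by default; build with -ffp-contract=off
 * (GCC/Clang) to keep the two paths distinct, and link with -lm. */
int main(void) {
    double a = 134217729.0;          /* 2^27 + 1: a*a is not exactly representable */
    double c = -(a * a);             /* negative of the rounded product            */

    double separate = a * a + c;     /* rounded product cancels c exactly -> 0.0   */
    double fused    = fma(a, a, c);  /* exact a*a + c, one rounding -> the product's
                                        rounding error, 1.0                        */

    printf("a*a + c    = %.17g\n", separate);
    printf("fma(a,a,c) = %.17g\n", fused);
    return 0;
}
```

The fused form here recovers the rounding error of the product, so the two expressions give different answers; any faithful emulation has to reproduce that single-rounding behaviour.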

Database Technology in a Blockchain World

VoltDB

It’s just about official—blockchain has taken over the world. As a result, these iterations of blockchain have extremely low throughput, extremely high latency, and low capacity—none of which is acceptable for, as an example, mission-critical trading systems in financial services.

Expanding the cloud to the Middle East: Introducing the AWS Middle East (Bahrain) Region

All Things Distributed

AZs refer to data centers in separate distinct locations within a single Region that are engineered to be operationally independent of other AZs, with independent power, cooling, and physical security, and are connected via a low-latency network.

Stuff The Internet Says On Scalability For December 7th, 2018

High Scalability

Wake up! It's HighScalability time: This is your 1500ms latency in real life situations - pic.twitter.com/guot8khIPX — Ivo Mägi (@ivomagi) November 27, 2018. Do you like this sort of Stuff? Please support me on Patreon. I'd really appreciate it. Know anyone looking for a simple book explaining the cloud? Then please recommend my well reviewed (31 reviews on Amazon and 72 on Goodreads!) book: Explain the Cloud Like I'm 10. They'll love it and you'll be their hero forever.