
USENIX SREcon APAC 2022: Computing Performance: What's on the Horizon

Brendan Gregg

The video is now on YouTube, and the slides are available online and as a PDF. In Q&A I was asked about CXL (Compute Express Link), which was fortunate, as I had planned to cover it and then forgot, so the question let me talk about it (although the Q&A is missing from the video). Ford, et al., “TCP



Trending Sources


What is serverless computing? Driving efficiency without sacrificing observability

Dynatrace

VMware commercialized the idea of virtual machines, and cloud providers embraced the same concept with services like Amazon EC2, Google Compute Engine, and Azure virtual machines. REST APIs, authentication, databases, email, and video processing all have a home on serverless platforms. Idle serverless functions are spun down, which creates latency (a cold start) when they need to restart.
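As a rough, hedged sketch (not from the Dynatrace article), the cold-start cost is easiest to see in an AWS Lambda-style Python handler: module-level setup runs once per cold start, while the handler body runs on every invocation. The handler name and the response fields below are illustrative assumptions.

```python
# Minimal sketch of a serverless (AWS Lambda-style) handler.
# Module-level code runs once per cold start; keeping it light
# reduces the restart latency described above.
import json
import time

# Expensive setup (DB clients, model loading, etc.) would live here
# and be paid only on a cold start, not on every invocation.
START = time.time()

def handler(event, context):
    # Per-request work: fast once the container is warm.
    return {
        "statusCode": 200,
        "body": json.dumps({
            "uptime_s": round(time.time() - START, 3),
            "echo": event.get("path", "/"),
        }),
    }
```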


Service level objectives: 5 SLOs to get started

Dynatrace

Note: you might hear the term latency used instead of response time. Both latency and response time are critical to ensure reliability, but they are not the same: latency typically refers to the time a single request spends in transit from its source to its destination, while response time also includes the time spent processing the request.
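As an illustrative sketch (assumptions: the target host, and using TCP connect time as a crude stand-in for network latency), the distinction can be measured from a client with only the Python standard library:

```python
# Rough sketch: separating "time in transit" from total response time.
# TCP connect time is used as a crude proxy for network latency;
# total elapsed time is the response time the client observes.
import socket
import time
import urllib.request

HOST = "example.com"  # illustrative target

t0 = time.perf_counter()
sock = socket.create_connection((HOST, 443), timeout=5)
connect_latency = time.perf_counter() - t0   # ~ network latency
sock.close()

t1 = time.perf_counter()
with urllib.request.urlopen(f"https://{HOST}/", timeout=5) as resp:
    resp.read()
response_time = time.perf_counter() - t1     # transit + server processing

print(f"connect latency ~ {connect_latency * 1000:.1f} ms")
print(f"response time   ~ {response_time * 1000:.1f} ms")
```

The first number approximates time in transit; the second also includes server processing, which is typically what a user-facing SLO cares about.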


Service level objective examples: 5 SLO examples for faster, more reliable apps

Dynatrace

Note: you might hear the term latency used instead of response time. Both latency and response time are critical to ensure reliability, but they are not the same: latency typically refers to the time a single request spends in transit from its source to its destination, while response time also includes the time spent processing the request.


Five Data-Loading Patterns To Improve Frontend Performance

Smashing Magazine

There are millions of sites, and you are in close competition with every one of those Google search query results. Agustinus Theodorus, 2022-09-27.


Bulldozer: Batch Data Moving from Data Warehouse to Online Key-Value Stores

The Netflix TechBlog

Data scientists and engineers collect this data from our subscribers and videos, and implement data analytics models to discover customer behaviour with the goal of maximizing user joy. The data warehouse is not designed to serve point requests from microservices with low latency. Moving data with Bulldozer at Netflix.
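As a hedged sketch of the general pattern the snippet describes (not Netflix's actual Bulldozer API; the row format, keys, and in-memory store below are illustrative assumptions), a batch job copies precomputed warehouse output into an online key-value store so microservices can serve point reads with low latency:

```python
# Illustrative sketch: batch-moving precomputed warehouse rows into a
# key-value store keyed by entity id, so online services can do fast
# point lookups instead of querying the warehouse directly.
from typing import Dict, Iterable, Tuple

# Stand-in for an online KV store (e.g. a cache or wide-column store).
kv_store: Dict[str, dict] = {}

def warehouse_export() -> Iterable[Tuple[str, dict]]:
    # Stand-in for reading a warehouse table or partition in bulk.
    yield "user:123", {"top_genres": ["drama", "sci-fi"], "score": 0.91}
    yield "user:456", {"top_genres": ["comedy"], "score": 0.72}

def batch_load() -> None:
    # The batch-move step: bulk copy warehouse output into the KV store.
    for key, value in warehouse_export():
        kv_store[key] = value

batch_load()
# Online path: a microservice does a low-latency point read by key.
print(kv_store["user:123"])
```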
