
Crucial Redis Monitoring Metrics You Must Watch

Scalegrid

Redis® is an in-memory database that provides blazingly fast performance. This makes it a compelling alternative to disk-based databases when performance is a concern. Redis returns a long list of database metrics when you run the INFO command in the Redis shell. This blog post lists the important database metrics to monitor.
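As a hedged illustration, here is a minimal sketch of pulling a few commonly watched metrics out of the INFO output, assuming a local Redis instance and the redis-py client (the specific fields chosen here are examples, not the post's full list):

```python
# Minimal sketch: read a few key health metrics from Redis INFO.
# Assumes a local Redis instance and the redis-py client (pip install redis).
import redis

r = redis.Redis(host="localhost", port=6379)
info = r.info()  # same data the INFO command returns in the Redis shell

hits = info.get("keyspace_hits", 0)
misses = info.get("keyspace_misses", 0)
hit_rate = hits / (hits + misses) if (hits + misses) else None

print("used_memory:", info.get("used_memory_human"))
print("connected_clients:", info.get("connected_clients"))
print("evicted_keys:", info.get("evicted_keys"))
print("cache hit rate:", hit_rate)
```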


Best Practice for Creating Indexes on your MySQL Tables

Scalegrid

In this blog post, we discuss an approach to optimizing the MySQL index creation process so that your regular workload is not impacted: build the index on a replica first, then fail over to it. There will be a short window (tens of seconds) during which you lose connectivity to your database due to the failover, but this can be overcome with application-level retries, as sketched below.
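A minimal sketch of the application-level retry idea, assuming the mysql-connector-python driver; the host, credentials, query, and retry timings are hypothetical placeholders, not values from the post:

```python
# Minimal sketch: retry queries around a brief failover window.
# Assumes mysql-connector-python; connection details and query are placeholders.
import time
import mysql.connector
from mysql.connector import Error

def run_with_retries(query, retries=10, delay=5):
    for attempt in range(retries):
        try:
            conn = mysql.connector.connect(host="db.example.com",
                                           user="app", password="secret",
                                           database="app_db")
            try:
                cur = conn.cursor()
                cur.execute(query)
                return cur.fetchall()
            finally:
                conn.close()
        except Error:
            # Connection lost during the failover; wait and retry.
            time.sleep(delay)
    raise RuntimeError("query failed after retries")

rows = run_with_retries("SELECT COUNT(*) FROM orders")
```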


Trending Sources


How to Improve MySQL AWS Performance 2X Over Amazon RDS at The Same Cost

Scalegrid

AWS is the #1 cloud provider for open-source database hosting, and the go-to cloud for MySQL deployments. As organizations continue to migrate to the cloud, it’s important to get ahead of performance issues such as high latency, low throughput, and replication lag caused by greater distances between your users and your cloud infrastructure.


Tuning Autovacuum in PostgreSQL and Autovacuum Internals

Percona

The performance of a PostgreSQL database can be compromised by dead tuples, since they continue to occupy space and can lead to bloat. We provided an introduction to VACUUM and bloat in an earlier blog post; this post covers tuning autovacuum in PostgreSQL and how autovacuum works internally.
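As a hedged sketch of where such tuning usually starts, here is a small example of spotting dead-tuple buildup from PostgreSQL's statistics views, assuming psycopg2 and a reachable instance (the connection string is a placeholder):

```python
# Minimal sketch: list the tables with the most dead tuples, a common
# starting point before tuning autovacuum. Assumes psycopg2; DSN is a placeholder.
import psycopg2

conn = psycopg2.connect("dbname=app_db user=postgres")
with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
        FROM pg_stat_user_tables
        ORDER BY n_dead_tup DESC
        LIMIT 10
    """)
    for relname, live, dead, last_av in cur.fetchall():
        print(relname, live, dead, last_av)
```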


Towards a Reliable Device Management Platform

The Netflix TechBlog

The challenge is to ingest and process device events in a scalable manner, i.e., scaling with the number of devices, which is the focus of this blog post. In particular, the Kafka integration is the most relevant part of the platform for that discussion.
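To make the ingestion idea concrete, here is a minimal sketch of consuming device events from Kafka, assuming the kafka-python client; the topic, broker, and consumer group names are hypothetical and not from the Netflix post:

```python
# Minimal sketch: consume device events from a Kafka topic and process them.
# Assumes kafka-python; topic/broker/group names are hypothetical placeholders.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "device-events",                      # hypothetical topic name
    bootstrap_servers=["broker:9092"],    # hypothetical broker address
    group_id="device-event-processors",   # scale out by adding consumers
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # Processing scales with the number of devices because the topic's
    # partitions are shared across the consumer group.
    print(event.get("device_id"), event.get("type"))
```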


Building Netflix’s Distributed Tracing Infrastructure

The Netflix TechBlog

In our previous blog post we introduced Edgar, our troubleshooting tool for streaming sessions. If we had an ID for each streaming session then distributed tracing could easily reconstruct session failure by providing service topology, retry and error tags, and latency measurements for all service calls.
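As a hedged sketch of that idea, here is one way a per-session trace identifier could be attached to outbound service calls so a tracing backend can stitch them together, using the requests library; the header name and URL are illustrative, not Netflix's actual implementation:

```python
# Minimal sketch: attach a per-session trace ID to every downstream call so
# the calls can be reassembled into one trace. Header and URL are illustrative.
import uuid
import requests

trace_id = str(uuid.uuid4())  # one ID per streaming session

def call_service(url, **kwargs):
    headers = kwargs.pop("headers", {})
    headers["X-Trace-Id"] = trace_id  # propagated on every hop
    return requests.get(url, headers=headers, **kwargs)

resp = call_service("https://playback.example.com/manifest")
print(resp.status_code)
```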


Optimizing your Kubernetes clusters without breaking the bank

Dynatrace

Tuning thousands of parameters has become an impossible task to achieve via a manual, time-consuming approach; the Akamas approach automates it. The optimization goal was to improve application efficiency, that is, the ratio between service throughput and cloud costs, without increasing application latency.
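A hedged sketch of the stated goal: score a candidate configuration by throughput per unit of cloud cost and reject it if latency regresses past a baseline. All names and numbers below are illustrative, not taken from the article:

```python
# Minimal sketch of the optimization goal: maximize throughput / cost while
# not letting latency regress past a baseline. Values are illustrative.
def efficiency(throughput_rps, cost_per_hour):
    return throughput_rps / cost_per_hour

def acceptable(candidate_latency_ms, baseline_latency_ms):
    return candidate_latency_ms <= baseline_latency_ms

baseline = {"latency_ms": 120.0}
candidate = {"throughput_rps": 900.0, "cost_per_hour": 3.2, "latency_ms": 115.0}

if acceptable(candidate["latency_ms"], baseline["latency_ms"]):
    print("efficiency:", efficiency(candidate["throughput_rps"],
                                    candidate["cost_per_hour"]))
else:
    print("rejected: latency regression")
```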
