Setting Up and Deploying PostgreSQL for High Availability

Percona

Such a proxy acts as the traffic cop between the applications and the database servers. You can get more details — and view actual architectures — at the Percona Highly Available PostgreSQL web page or by downloading our white paper, Percona Distribution for PostgreSQL: High Availability With Streaming Replication.

Total Cost of Ownership and the Return on Agility

All Things Distributed

This white paper provides a detailed view of the comparable costs of running various workloads on-premises and in the AWS Cloud, and offers guidance on making an apples-to-apples comparison of the TCO of running web applications in the AWS Cloud versus running them on-premises. Below are some highlights of the Web Applications white paper.


Trending Sources

Measuring Carbon is Not Enough — Unintended Consequences

Adrian Cockcroft

A rough guide, if you don’t have any better data: with no traffic, a system sits at about 10% utilization and draws 30% of peak power; at 25% utilization it draws 50% of peak power; and at 50% utilization it draws 75% of peak power. Load peaks can be caused by inefficient initialization code at startup, cron jobs, traffic spikes, or retry storms.
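The rough guide above can be turned into a simple estimator by linearly interpolating between those data points. This is only a sketch of that rule of thumb, not a published model; the extension beyond 50% utilization toward 100%/100% is an assumption for illustration.

```python
def power_fraction(utilization):
    """Estimate power draw as a fraction of peak power at a given
    utilization, interpolating the rough guide's data points:
    10% utilization -> 30% of peak power, 25% -> 50%, 50% -> 75%.
    The (1.00, 1.00) endpoint is an assumed extrapolation."""
    points = [(0.10, 0.30), (0.25, 0.50), (0.50, 0.75), (1.00, 1.00)]
    if utilization <= points[0][0]:
        return points[0][1]  # idle floor: ~30% of peak power even with no traffic
    for (u0, p0), (u1, p1) in zip(points, points[1:]):
        if utilization <= u1:
            # linear interpolation between the two surrounding points
            return p0 + (p1 - p0) * (utilization - u0) / (u1 - u0)
    return 1.0  # clamp anything above 100% utilization
```

For example, `power_fraction(0.25)` returns `0.50`, matching the guide; the key takeaway is that halving utilization saves far less than half the power.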

An Introduction to MySQL Replication: Exploring Different Types of MySQL Replication Solutions

Percona

When a problem is detected, these tools automatically redirect traffic to a standby replica. To delve deeper into this topic and explore Percona’s recommendations for HA architecture and deployment, we invite you to download our white paper.

MongoDB Best Practices: Security, Data Modeling, & Schema Design

Percona

This allows MongoDB to scale horizontally, handling large datasets and high traffic loads. In MongoDB, sharding is achieved by splitting a collection into shards, each holding a subset of the data; the shards are then distributed across multiple machines in a cluster, with each machine hosting one or more shards.
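To make the idea concrete, here is a toy sketch of shard routing: documents are assigned to shards by hashing a shard-key field, so that the same key value always lands on the same shard. This is an illustrative stand-in for MongoDB's hashed sharding, not its actual implementation; the `user_id` field and the three-shard layout are assumptions for the example.

```python
import hashlib

def shard_for(key, num_shards):
    """Map a shard-key value to a shard index via a stable hash,
    so equal keys always route to the same shard."""
    digest = hashlib.md5(str(key).encode("utf-8")).hexdigest()
    return int(digest, 16) % num_shards

# Distribute sample documents across 3 shards by their shard key.
NUM_SHARDS = 3
docs = [{"user_id": i, "name": f"user{i}"} for i in range(10)]
shards = {s: [] for s in range(NUM_SHARDS)}
for doc in docs:
    shards[shard_for(doc["user_id"], NUM_SHARDS)].append(doc)
```

A query that includes the shard key can be routed to a single shard, while queries without it must fan out to all shards — the same trade-off that makes shard-key choice critical in a real MongoDB cluster.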