Fri, Jun 28, 2019


The Importance of QA Testing for Software Development

DZone

Quality assurance testing (QA testing) is the practice of ensuring a company delivers the best possible product or service by putting the right processes in place during development. Companies need to test and analyze their software products to make sure they meet market standards and fulfill their established goals. One easy way for companies to build QA testing into their development process is through QA outsourcing.

Software 176

Elevate your dashboards with the new Dynatrace metrics framework

Dynatrace

Dynatrace news. Dynatrace leverages high-fidelity data to fuel Davis, our AI-driven causation engine for automatic monitoring insights. If you’re already using Davis as a foundation for custom drill-downs or dashboards to answer business-specific questions, you’ll be able to do this even more extensively with Dynatrace version 1.172. We’ve introduced a new framework for metrics that provides (1) a logical tree structure, (2) globally unique metric keys that ease integration bet…

Metrics 142
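The "logical tree structure" of the new metric keys can be sketched in a few lines of Python. This is purely illustrative — the metric keys below are example-style keys, not taken from the release notes — showing how colon-and-dot-delimited keys naturally arrange into a nested tree:

```python
def build_metric_tree(metric_keys):
    """Arrange metric keys like "builtin:host.cpu.usage" into a nested dict.

    The part before ":" is treated as the source prefix, and the dotted
    remainder as a path of tree nodes. Leaf nodes end up as empty dicts.
    """
    tree = {}
    for key in metric_keys:
        source, _, path = key.partition(":")
        node = tree.setdefault(source, {})
        for part in path.split("."):
            node = node.setdefault(part, {})
    return tree

# Illustrative keys only; real key names come from the Dynatrace environment.
keys = [
    "builtin:host.cpu.usage",
    "builtin:host.mem.usage",
    "builtin:service.response.time",
]
tree = build_metric_tree(keys)
```

A globally unique key like `builtin:host.cpu.usage` then has exactly one position in the tree, which is what makes drill-down dashboards and integrations unambiguous.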

Trending Sources


Managing the Modern IT Environment – Observability Do’s and Don’ts

DZone

Most people think observability is simply a fancier synonym for monitoring. But in the context of modern IT environments, “observability” takes on a much more relevant and distinct role to address new constructs like microservices and service mesh architectures, which have greatly complicated traditional management strategies. It used to be easy. You’d run a client/server model, for example, and you could quickly determine when the server wasn’t responding, or the client wasn’t communicating wit…

Strategy 100

Integrate Dynatrace more easily using the new Metrics REST API

Dynatrace

Dynatrace news. As a full-stack monitoring platform, Dynatrace collects a huge number of metrics for each OneAgent-monitored host in your environment. Depending on the types of technologies you’re running on your individual hosts, the average number of metrics is about 500 per computational node. Besides all the metrics that originate from your hosts, Dynatrace also collects all the important key performance metrics for services and real-user-monitored applications, as well as cloud platfo…

Metrics 102
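As a rough sketch of how a client might target a Metrics REST API like the one described, here is a minimal query-URL builder. The endpoint path (`/api/v2/metrics/query`), parameter names, and environment URL are assumptions for illustration — they are not confirmed by the teaser above, and a real request would also need an API-token header:

```python
from urllib.parse import urlencode

def build_metrics_query_url(env_url, metric_selector, resolution="1m"):
    """Build a metrics-query URL.

    Endpoint path and parameter names are assumptions modeled on a
    v2-style metrics API; check the Dynatrace API reference for the
    exact contract of your environment/version.
    """
    params = urlencode({
        "metricSelector": metric_selector,
        "resolution": resolution,
    })
    return f"{env_url}/api/v2/metrics/query?{params}"

# Hypothetical environment URL and metric key, for illustration only.
url = build_metrics_query_url(
    "https://myenv.live.dynatrace.com",
    "builtin:host.cpu.usage",
)
```

The globally unique metric key goes into the selector parameter, which is what makes scripted integrations straightforward once keys are stable.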

View from Nutanix storage during Postgres DB benchmark

n0derunner

A quick look at how the workload is seen from the Nutanix CVM, continuing the example from a prior post. The Linux VM running Postgres has two virtual disks: one takes the transaction-log writes, while the other handles reads and writes for the main datafiles. Since the DB is small (50% the size of the Linux RAM), the database is mostly cached on the read side, so we only see writes going to the DB files.


Benchmarking with Postgres PT1

n0derunner

Image by Daniel Lundin. In this example, we use Postgres and the pgbench workload generator to drive some load in a virtual machine. Assume a Linux virtual machine with Postgres installed, specifically a Bitnami virtual appliance. Once the VM has started, connect to the console. Allow access to port 5432 (the Postgres DB port), or alternatively allow SSH: $ sudo ufw allow 5432.
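The setup steps above can be sketched end-to-end as a short setup fragment. The database name `benchdb` and the use of the `postgres` OS user are assumptions for illustration; only the `ufw` rule comes from the post itself:

```shell
# Open the Postgres port through the firewall (as in the post).
sudo ufw allow 5432

# Create a benchmark database and populate the pgbench tables.
# Scale factor 1000 is the value used in part 2 of this series,
# which yields a database of roughly 15GB.
sudo -u postgres createdb benchdb
sudo -u postgres pgbench -i -s 1000 benchdb
```

On a Bitnami appliance the Postgres binaries and service user may live in different paths, so adjust the user and database names to your image.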


Benchmarking with Postgres PT2

n0derunner

In this example we run pgbench with a scale factor of 1000, which equates to a database size of around 15GB. The Linux VM has 32GB of RAM, so we don’t expect to see many reads. Using Prometheus with the Linux node exporter, we can see the disk I/O pattern from pgbench. As expected, the write pattern to the log disk (sda) is quite constant, while the write pattern to the database files (sdb) is bursty. pgbench with DB size 50% of Linux buffer cache.
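As a back-of-the-envelope check of the “scale factor 1000 ≈ 15GB” figure: each pgbench scale unit adds 100,000 rows to `pgbench_accounts`, which works out to roughly 15 MB on disk. The per-unit size below is an approximation for illustration, not a number from the post:

```python
def pgbench_db_size_mb(scale_factor, mb_per_unit=15):
    """Rough pgbench database-size estimate.

    Each scale-factor unit adds 100,000 rows to pgbench_accounts,
    roughly 15 MB on disk (approximate; actual size varies with
    fill factor and Postgres version).
    """
    return scale_factor * mb_per_unit

size_gb = pgbench_db_size_mb(1000) / 1024  # ~14.6, i.e. "around 15GB"
```

With the VM holding 32GB of RAM, a ~15GB database fits entirely in the Linux buffer cache, which is why the read traffic to disk stays near zero.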