
3 Performance Tricks for Dealing With Big Data Sets

DZone

This article describes three tricks I used when dealing with big data sets (on the order of 10 million records), each of which improved performance dramatically. Trick 1: CLOB instead of a result set.
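The excerpt doesn't spell out the CLOB trick, but a plausible reading is that the database aggregates many rows into a single serialized payload (a CLOB) that the client fetches in one round trip, instead of iterating row by row over a huge result set. A minimal Python sketch of that idea, with CSV as an assumed serialization format and helper names of my own invention:

```python
import csv
import io

def rows_to_clob(rows):
    """Serialize many rows into one CSV payload -- a stand-in for the
    database building a CLOB server-side instead of returning a result set."""
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    return buf.getvalue()

def clob_to_rows(clob):
    """Parse the single payload back into rows on the client side."""
    return [tuple(r) for r in csv.reader(io.StringIO(clob))]

rows = [(str(i), f"name-{i}") for i in range(5)]
clob = rows_to_clob(rows)          # one payload, one round trip
assert clob_to_rows(clob) == rows  # client recovers the original rows
```

The win comes from replacing millions of per-row fetch round trips and driver allocations with one large transfer; the actual serialization format would depend on the database in use.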


Data lakehouse innovations advance the three pillars of observability for more collaborative analytics

Dynatrace

As teams try to gain insight into this data deluge, they have to balance the need for speed, data fidelity, and scale against capacity constraints and cost. To solve this problem, Dynatrace launched Grail, its causational data lakehouse, in 2022. And without the encumbrances of traditional databases, Grail performs fast.


Trending Sources


Conducting log analysis with an observability platform and full data context

Dynatrace

Using Grail to heal observability pains: Grail not only stores big data in logs but also maps out dependencies to enable fast analytics and data reasoning. Weighing the value and cost of indexed databases vs. Grail: with standard indexed databases, teams must choose the relevant indexes before data ingestion.


Optimizing anomaly detection and noise

Dynatrace

I took a big-data-analysis approach, which started with another problem visualization. I wanted to understand how I could tune Dynatrace's problem detection, but to do that I needed to understand the situation first. To achieve that, I took two approaches: visualizing historic problem data via a “Swimlane Visualization”.


A guide to Autonomous Performance Optimization

Dynatrace

Stefano started his presentation by showing how much cost and performance optimization is possible when you know how to properly configure your application runtimes, databases, or cloud environments: correct configuration of JVM parameters alone can save up to 75% of resource utilization while delivering the same or better performance!
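The excerpt doesn't list the specific parameters from the talk; as an illustration, these are the kinds of standard HotSpot flags such JVM tuning usually touches. The values here are placeholders, not recommendations:

```shell
# Placeholder values: right-size the heap (fixed min/max avoids resize churn),
# pick a garbage collector explicitly, and give G1 a pause-time goal.
java -Xms2g -Xmx2g \
     -XX:+UseG1GC \
     -XX:MaxGCPauseMillis=200 \
     -jar app.jar
```

The right values depend entirely on the workload, which is exactly why measuring before and after each change matters.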


Why MySQL Could Be Slow With Large Tables

Percona

Some startups, such as Facebook, Uber, and Pinterest, adopted MySQL in its early days; they are now big, successful companies, which proves that MySQL can run on large databases and on heavily used sites. It was developed for optimizing data storage and access for big data sets.
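A large part of why MySQL can stay fast on large tables is its B-tree indexes: a point lookup inspects O(log n) entries instead of scanning every row. A rough Python sketch of the difference, with a sorted list plus `bisect` standing in for a B-tree index:

```python
import bisect

# One million "row IDs", stored in sorted order like a clustered index.
ids = list(range(1_000_000))

def indexed_lookup(ids, target):
    """B-tree-style lookup: O(log n) comparisons via binary search."""
    i = bisect.bisect_left(ids, target)
    return i if i < len(ids) and ids[i] == target else None

def full_scan(ids, target):
    """Without a usable index, the server examines rows until one matches."""
    for i, v in enumerate(ids):
        if v == target:
            return i
    return None

# Both find the row, but the indexed path does ~20 comparisons, not ~1,000,000.
assert indexed_lookup(ids, 999_999) == full_scan(ids, 999_999) == 999_999
```

Real-world slowness on large tables usually appears when queries can't use such an index (or the working set no longer fits in memory), which is where schema and query tuning come in.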


Should You Use ClickHouse as a Main Operational Database?

Percona

What if we use ClickHouse (which is a columnar analytical database) as our main datastore? Well, typically, an analytical database is not a replacement for a transactional or key/value datastore. Although such databases can be very efficient with counts and averages, some queries will be slow or simply nonexistent.
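The row-store vs. column-store trade-off behind that warning can be sketched in a few lines: a columnar layout keeps each column contiguous, so counts and averages touch only the data they need, while fetching a whole record by key is a natural fit for the row layout. A toy Python illustration (not ClickHouse's actual storage format):

```python
# Row store: one record per entry -- good for fetching a whole row by key.
rows = [{"id": i, "amount": i * 2, "country": "US" if i % 2 else "DE"}
        for i in range(1000)]

# Column store: one contiguous array per column -- good for aggregates.
columns = {
    "id": [r["id"] for r in rows],
    "amount": [r["amount"] for r in rows],
    "country": [r["country"] for r in rows],
}

# An average reads a single column and never touches the other fields.
avg_amount = sum(columns["amount"]) / len(columns["amount"])
assert avg_amount == 999.0

# A point lookup by key reads one record from the row layout.
assert rows[42]["country"] == "DE"
```

This is why analytical databases shine at aggregations over millions of rows but make poor primary stores for transactional, row-at-a-time workloads.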