
What is Greenplum Database? Intro to the Big Data Database

Scalegrid

Greenplum Database is a massively parallel processing (MPP) SQL database built on PostgreSQL. It scales to multi-petabyte data workloads and lets a cluster of powerful servers work together behind a single SQL interface through which you can query all of the data.
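Because Greenplum speaks the PostgreSQL wire protocol, the stock PostgreSQL JDBC driver is usually enough to use that single SQL interface. The sketch below is only illustrative: the coordinator host, credentials, and events table are invented.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class GreenplumQuery {
    public static void main(String[] args) throws Exception {
        // Hypothetical coordinator host and database; Greenplum is PostgreSQL-based,
        // so the standard PostgreSQL JDBC driver can connect to it.
        String url = "jdbc:postgresql://gp-coordinator.example.com:5432/analytics";
        try (Connection conn = DriverManager.getConnection(url, "gpadmin", "secret");
             Statement stmt = conn.createStatement();
             // One SQL statement against one endpoint; Greenplum distributes the
             // work across its segment servers behind the scenes.
             ResultSet rs = stmt.executeQuery(
                     "SELECT region, count(*) FROM events GROUP BY region")) {
            while (rs.next()) {
                System.out.println(rs.getString(1) + " -> " + rs.getLong(2));
            }
        }
    }
}
```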

Big Data 321

3 Performance Tricks for Dealing With Big Data Sets

DZone

This article describes three tricks I used when working with big data sets (on the order of 10 million records) that dramatically improved performance. Trick 1: CLOB Instead of Result Set.
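The excerpt only names the trick, so the snippet below is a hedged reading of it rather than the author's implementation: let the database assemble the many rows into a single CLOB (for example via a stored procedure or server-side aggregation) and stream that one value, instead of iterating millions of ResultSet rows. The connection URL, table, and column names are invented.

```java
import java.io.BufferedReader;
import java.sql.Clob;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ClobInsteadOfResultSet {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:oracle:thin:@//db.example.com:1521/APPDB"; // hypothetical
        try (Connection conn = DriverManager.getConnection(url, "app", "secret");
             // report_clob is assumed to be populated server-side (e.g. by a stored
             // procedure that concatenates the ~10M records into one document).
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT report_clob FROM report_snapshots WHERE batch_id = ?")) {
            ps.setLong(1, 42L);
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next()) {
                    Clob clob = rs.getClob(1);
                    // Stream one large value instead of fetching millions of rows.
                    try (BufferedReader reader =
                                 new BufferedReader(clob.getCharacterStream())) {
                        reader.lines().forEach(line -> {
                            // parse/process one logical record per line
                        });
                    }
                }
            }
        }
    }
}
```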

Big Data 246

Trending Sources


ScyllaDB Trends – How Users Deploy The Real-Time Big Data Database

Scalegrid

ScyllaDB is an open-source distributed NoSQL data store, reimplemented from the popular Apache Cassandra database. We’ve heard a lot about this rising database from the DBA community and our users, and decided to become a sponsor of this year’s Scylla Summit to learn more about deployment trends from its users.
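Because ScyllaDB keeps CQL and driver compatibility with Cassandra, a Cassandra-compatible driver is typically enough to talk to it. This minimal sketch assumes the DataStax Java driver 4.x and a single local node; the contact point and datacenter name are placeholders.

```java
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.Row;

import java.net.InetSocketAddress;

public class ScyllaHello {
    public static void main(String[] args) throws Exception {
        // Contact point and datacenter name are assumptions for a local test node.
        try (CqlSession session = CqlSession.builder()
                .addContactPoint(new InetSocketAddress("127.0.0.1", 9042))
                .withLocalDatacenter("datacenter1")
                .build()) {
            // Same CQL query you would run against Cassandra.
            Row row = session.execute("SELECT release_version FROM system.local").one();
            System.out.println("Connected, release_version = " + row.getString("release_version"));
        }
    }
}
```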

Big Data 187

Understanding the Database Connection Pool (DBCP) Properties

DZone

Recently, I faced an issue with very high load on the database layer: the database was handling too many connections in parallel, so I had to review my application’s database connection pool (DBCP) properties very closely.
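For context, the kind of sizing properties being reviewed look roughly like this with Apache Commons DBCP2; the connection details and limits below are illustrative assumptions, not recommendations.

```java
import org.apache.commons.dbcp2.BasicDataSource;

public class PoolConfig {
    public static BasicDataSource newDataSource() {
        BasicDataSource ds = new BasicDataSource();
        ds.setUrl("jdbc:postgresql://db.example.com:5432/appdb"); // hypothetical
        ds.setUsername("app");
        ds.setPassword("secret");

        // Core sizing properties: how many connections may exist at once,
        // how many stay idle, and how long a caller waits for a free one.
        ds.setInitialSize(5);
        ds.setMaxTotal(20);        // hard cap on parallel connections
        ds.setMaxIdle(10);
        ds.setMinIdle(5);
        ds.setMaxWaitMillis(10_000);

        // Validate connections before handing them out.
        ds.setValidationQuery("SELECT 1");
        ds.setTestOnBorrow(true);
        return ds;
    }
}
```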

Database 245

In-Stream Big Data Processing

Highly Scalable

The shortcomings of batch-oriented data processing were recognized by the Big Data community long ago. Distributed in-stream data processing is closely related to query processing in distributed relational databases. Basics of Distributed Query Processing.
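To make the contrast with batch processing concrete, here is a minimal, framework-free sketch of the in-stream idea: aggregates are updated as each record arrives instead of being recomputed over a stored batch. The event names and the stand-in event source are invented for illustration.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.Stream;

public class StreamingCount {
    // Running aggregate, updated per event rather than by a periodic batch job.
    private final Map<String, Long> countsByKey = new ConcurrentHashMap<>();

    public void onEvent(String key) {
        countsByKey.merge(key, 1L, Long::sum);
    }

    public static void main(String[] args) {
        StreamingCount job = new StreamingCount();
        // Stand-in for an unbounded stream of incoming events.
        Stream.of("click", "view", "click", "purchase", "click")
              .forEach(job::onEvent);
        System.out.println(job.countsByKey); // e.g. {purchase=1, view=1, click=3}
    }
}
```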

Big Data 154

What is IT operations analytics? Extract more data insights from more sources

Dynatrace

Then, big data analytics technologies such as Hadoop, NoSQL, Spark, or Grail (the Dynatrace data lakehouse technology) interpret this information. Here are the six steps of a typical ITOA process: define the data infrastructure strategy.

Analytics 193

What Should You Know About Graph Database’s Scalability?

DZone

A distributed and scalable graph database system is highly sought after in many enterprise scenarios. Do not be misled: designing and implementing a scalable graph database system has never been a trivial task.