
What is Greenplum Database? Intro to the Big Data Database

Scalegrid

High performance, query optimization, open source licensing, and polymorphic data storage are Greenplum’s major advantages. When you handle large amounts of complex data, or big data, a single machine can quickly be overwhelmed by the processing required to produce your analytics results.
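As a rough illustration of what polymorphic storage looks like in practice, the minimal sketch below uses psycopg2 to create one row-oriented table and one column-oriented, compressed, append-optimized table on a Greenplum cluster. The connection details, table names, and Greenplum 6-style storage options are assumptions for the example, not taken from the article.

```python
# Hypothetical example: one row-oriented table for hot, frequently updated data
# and one column-oriented, compressed table for analytical scans.
# Host, database, credentials, and table/column names are made up.
import psycopg2

conn = psycopg2.connect(host="gp-master.example.com", dbname="analytics",
                        user="gpadmin", password="secret")
cur = conn.cursor()

# Row-oriented heap table: suits point lookups and frequent updates.
cur.execute("""
    CREATE TABLE orders_recent (
        order_id    bigint,
        customer_id bigint,
        amount      numeric,
        created_at  timestamp
    ) DISTRIBUTED BY (order_id);
""")

# Column-oriented, compressed, append-optimized table: suits wide analytical
# scans that touch only a few columns of historical data.
cur.execute("""
    CREATE TABLE orders_history (
        order_id    bigint,
        customer_id bigint,
        amount      numeric,
        created_at  timestamp
    )
    WITH (appendoptimized=true, orientation=column, compresstype=zstd)
    DISTRIBUTED BY (order_id);
""")

conn.commit()
cur.close()
conn.close()
```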


Data Engineers of Netflix: Interview with Pallavi Phadnis

The Netflix TechBlog

This post is part of our “Data Engineers of Netflix” series, where our very own data engineers talk about their journeys to Data Engineering @ Netflix. Pallavi Phadnis is a Senior Software Engineer at Netflix.



Optimizing data warehouse storage

The Netflix TechBlog

At this scale, we can gain significant performance and cost benefits by optimizing the storage layout (records, objects, partitions) as data lands in our warehouse. We built AutoOptimize to efficiently and transparently optimize the data and metadata storage layout while maximizing the cost and performance benefits.
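The article describes Netflix's AutoOptimize service; the sketch below is only a hand-rolled illustration of one part of the general idea, namely compacting many small files within a partition into batches near a target size to reduce file-count overhead. The file names and target size are invented, and this is not Netflix's implementation.

```python
# Illustrative compaction planner: greedily pack (path, size) entries into
# batches close to a target output size. Purely a sketch, not AutoOptimize.
from typing import List, Tuple

TARGET_BYTES = 256 * 1024 * 1024  # assumed target output file size


def plan_compaction(files: List[Tuple[str, int]]) -> List[List[str]]:
    """Group small files into batches whose combined size nears TARGET_BYTES."""
    batches, current, current_size = [], [], 0
    # Sort smallest-first so tiny files are merged together aggressively.
    for path, size in sorted(files, key=lambda f: f[1]):
        if size >= TARGET_BYTES:
            continue  # already large enough; leave it alone
        if current and current_size + size > TARGET_BYTES:
            batches.append(current)
            current, current_size = [], 0
        current.append(path)
        current_size += size
    if len(current) > 1:  # a batch containing a single file gains nothing
        batches.append(current)
    return batches


if __name__ == "__main__":
    demo = [("part-0001", 8_000_000), ("part-0002", 12_000_000),
            ("part-0003", 300_000_000), ("part-0004", 5_000_000)]
    print(plan_compaction(demo))  # the three small files end up in one batch
```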


Less is More: Engineering Data Warehouse Efficiency with Minimalist Design

Uber Engineering

Maintaining Uber’s large-scale data warehouse comes with an operational cost in terms of ETL functions and storage. In our experience, optimizing for operational efficiency requires answering one key question: for which tables does the maintenance cost outweigh the utility they provide?
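As a hedged illustration of that cost-versus-utility question, the sketch below scores hypothetical tables by comparing an assumed monthly storage-plus-ETL cost against a simple usage-based utility proxy. The field names, unit costs, and scoring rule are invented for the example and do not reflect Uber's actual model.

```python
# Toy cost/utility comparison over per-table metadata. All constants and
# fields are assumptions made for illustration.
from dataclasses import dataclass


@dataclass
class TableStats:
    name: str
    storage_gb: float            # data at rest
    etl_hours_per_month: float   # pipeline compute to keep it fresh
    queries_per_month: int       # how often anyone reads it
    distinct_users: int          # how many people depend on it


STORAGE_COST_PER_GB = 0.02   # assumed $/GB-month
ETL_COST_PER_HOUR = 1.50     # assumed $/compute-hour


def monthly_cost(t: TableStats) -> float:
    return t.storage_gb * STORAGE_COST_PER_GB + t.etl_hours_per_month * ETL_COST_PER_HOUR


def utility(t: TableStats) -> float:
    # Simple proxy: read volume weighted by the breadth of the audience.
    return t.queries_per_month * (1 + t.distinct_users)


def candidates_for_deprecation(tables, ratio=1.0):
    """Return names of tables whose maintenance cost exceeds their utility."""
    return [t.name for t in tables if monthly_cost(t) > ratio * utility(t)]
```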


In-Stream Big Data Processing

Highly Scalable

The shortcomings and drawbacks of batch-oriented data processing were widely recognized by the Big Data community long ago. The design of the in-stream processing engine itself was driven by requirements such as SQL-like functionality; strict fault tolerance, in particular, is a principal requirement for the engine.
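To make those two requirements concrete, here is a minimal, assumed sketch: a streaming GROUP BY count with periodic state checkpoints so the operator can recover after a failure. The checkpoint path, snapshot interval, and event layout (a dict with a "key" field) are invented for the example; this is not the engine design the article describes.

```python
# Streaming GROUP BY count with periodic, atomic state checkpoints.
import json
import os
from collections import defaultdict

CHECKPOINT_PATH = "counts.ckpt"   # assumed local checkpoint location
CHECKPOINT_EVERY = 1000           # events between snapshots


def load_checkpoint():
    """Restore the last durable state, or start empty on first run."""
    if os.path.exists(CHECKPOINT_PATH):
        with open(CHECKPOINT_PATH) as f:
            return defaultdict(int, json.load(f))
    return defaultdict(int)


def checkpoint(counts):
    """Write state to a temp file, then swap it in so a crash never leaves a partial file."""
    tmp = CHECKPOINT_PATH + ".tmp"
    with open(tmp, "w") as f:
        json.dump(dict(counts), f)
    os.replace(tmp, CHECKPOINT_PATH)


def process_stream(events):
    counts = load_checkpoint()
    for i, event in enumerate(events, start=1):
        counts[event["key"]] += 1          # the running GROUP BY count
        if i % CHECKPOINT_EVERY == 0:      # durable snapshot for recovery
            checkpoint(counts)
    return counts
```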


Conducting log analysis with an observability platform and full data context

Dynatrace

With more automated approaches to log monitoring and log analysis, however, organizations can gain visibility into their applications and infrastructure efficiently and with greater precision—even as cloud environments grow. “The weakness of a data lake is they fail when you need to access them fast,” Pawlowski said.
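A toy way to see the point about fast access: the sketch below keeps a small in-memory index over log lines so lookups avoid re-scanning the raw data on every query. The assumed log format ("<timestamp> <service> <message>") and index shape are illustrative only and are not drawn from the article.

```python
# Minimal inverted index over log lines, keyed by service name.
from collections import defaultdict


def build_index(log_lines):
    """Map service name -> line numbers, built in one pass over the data."""
    index = defaultdict(list)
    for lineno, line in enumerate(log_lines):
        service = line.split(" ", 2)[1]  # assumed format: "<timestamp> <service> <message>"
        index[service].append(lineno)
    return index


def lookup(log_lines, index, service):
    """Fetch matching lines by offset instead of scanning everything."""
    return [log_lines[i] for i in index.get(service, [])]
```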


Any analysis, any time: Dynatrace Log Management and Analytics powered by Grail

Dynatrace

Several pain points have made it difficult for organizations to manage their data efficiently and create real value. Limited data availability constrains value creation, and existing approaches are cumbersome and challenging to operate efficiently at scale. Teams have introduced workarounds to reduce storage costs.
