What is a data lakehouse? Combining data lakes and warehouses for the best of both worlds

Today's organizations need a place to store massive amounts of structured and unstructured data. With a data lakehouse, organizations get the best of data lakes and data warehouses.

While data lakes and data warehouses are commonly used architectures for storing and analyzing data, a data lakehouse is an efficient third option that unifies the two while preserving the benefits of both.

A data lakehouse, therefore, enables organizations to get the best of both worlds.

But before your data moves into a lakehouse, it’s important to understand what this architecture looks like in practice. Let’s explore what constitutes a data lakehouse, how it works, its pros and cons, and how it differs from data lakes and data warehouses.

What is a data lakehouse?

A data lakehouse combines the flexibility and cost-efficiency of a data lake with the contextual, high-speed querying capabilities of a data warehouse.

Data warehouses offer a single storage repository for structured data and provide a source of truth for organizations. However, organizations must structure data into a predefined schema through extract, transform, and load (ETL) processes before it lands in the warehouse, so that it can be queried efficiently.

Data lakes, meanwhile, are flexible environments that can store both structured and unstructured data in its raw, native form. This approach enables organizations to build artificial intelligence (AI) and machine learning models from large volumes of disparate data sets. Unlike in data warehouses, however, data is not transformed before landing in storage, so a lake can become unwieldy to use if organizations do not carefully manage it.

A data lakehouse isn't a data lake or a data warehouse. It combines the best of both worlds into one solution.

How does a data lakehouse work?

Data lakehouses typically support data ingestion through a variety of methods, including application programming interfaces (APIs), streaming, and more. Data lands in its original, raw form without requiring a schema definition. Because a data lakehouse provides a cost-effective storage layer for both structured and unstructured data, it can hold all of an organization’s data.
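
As a rough illustration of this schema-on-read ingestion pattern, the sketch below lands arbitrary JSON events in an object-store landing zone. It assumes AWS S3 via boto3; the bucket name, key layout, and event fields are hypothetical.

```python
import json
import uuid
from datetime import datetime, timezone

import boto3  # assumes AWS S3 as the object store; other object stores work similarly

s3 = boto3.client("s3")

def ingest_raw_event(event: dict, bucket: str = "example-lakehouse") -> str:
    """Land a raw event in the lakehouse landing zone without enforcing a schema."""
    now = datetime.now(timezone.utc)
    # Partition the landing zone by date so downstream jobs can find new data cheaply.
    key = f"landing/events/{now:%Y/%m/%d}/{uuid.uuid4()}.json"
    s3.put_object(Bucket=bucket, Key=key, Body=json.dumps(event).encode("utf-8"))
    return key

# Any payload shape is accepted as-is; structure is applied later, at read time.
ingest_raw_event({"source": "checkout-service", "level": "ERROR", "latency_ms": 512})
```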

Generally, the storage layer categorizes data into landing, raw, and curated zones depending on its readiness for consumption. The result is a framework that offers a single source of truth while enabling companies to make the most of advanced analytics capabilities.

In a data lakehouse model, organizations first move data from sources into the data lake. A subset of this data is then refined into curated, trusted data sets, to which organizations apply the required governance, usage, and access rules.
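
A minimal sketch of that raw-to-curated promotion might look like the following. It assumes the S3 landing zone from the previous example; the field names, quality rules, and zone prefixes are illustrative, and pyarrow is used here to write the curated subset as columnar Parquet.

```python
import json

import boto3
import pyarrow as pa
import pyarrow.parquet as pq

s3 = boto3.client("s3")
BUCKET = "example-lakehouse"  # hypothetical bucket name

# 1. Read a batch of raw JSON objects from the landing zone.
listing = s3.list_objects_v2(Bucket=BUCKET, Prefix="landing/events/2024/01/01/")
records = []
for obj in listing.get("Contents", []):
    body = s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read()
    records.append(json.loads(body))

# 2. Curate: keep only the governed fields and drop records that fail basic quality rules.
curated = [
    {"source": r.get("source"), "level": r.get("level"), "latency_ms": r.get("latency_ms")}
    for r in records
    if r.get("source") and r.get("level")
]

# 3. Write the curated subset as columnar Parquet into the curated zone,
#    where access and governance policies are enforced.
table = pa.Table.from_pylist(curated)
pq.write_table(table, "part-0000.parquet")
s3.upload_file("part-0000.parquet", BUCKET, "curated/events/2024/01/01/part-0000.parquet")
```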

What are the features of a data lakehouse?

While the specifics of data lakehouses differ based on business objectives and use cases, the following five features are fundamental:

  1. Data ingestion. Data lakehouses ingest large volumes of structured and unstructured data at very high speed in their raw, native form. Agent-based and open technologies make it easy to ingest large volumes of observability, security, and business data.
  2. Data management. Data lakehouses take advantage of low-cost object stores like AWS S3 or Microsoft Azure Blob Storage to store and manage data cost-effectively.
  3. Query language. Data lakehouses offer a way to interrogate the data and send processing instructions in the form of queries. These queries need to perform well enough to support ad-hoc analytics use cases (see the sketch after this list).
  4. Massively parallel processing. Data lakehouses deliver query responses with minimal latency, which is difficult to achieve at this scale with conventional architectures. Massively parallel processing distributes query execution across multiple processors to speed up the response.
  5. Support for diverse analytics workloads. Data lakehouses support multiple analytics and machine learning use cases, such as exploratory analytics to answer ad-hoc questions.
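
To make the query-language and analytics-workload points concrete, here is a small sketch of an ad-hoc query over the curated zone from the earlier examples. It uses DuckDB purely for illustration; lakehouse platforms expose their own query engines and languages, and the bucket path and fields are hypothetical.

```python
import duckdb  # an embedded SQL engine, used here purely for illustration

con = duckdb.connect()
# The httpfs extension lets DuckDB read Parquet directly from S3-compatible object storage.
# This assumes object-store credentials are already configured in the environment.
con.execute("INSTALL httpfs; LOAD httpfs;")

# Ad-hoc question: which sources produce the most errors, and how slow are they?
result = con.execute("""
    SELECT source,
           count(*)        AS errors,
           avg(latency_ms) AS avg_latency_ms
    FROM read_parquet('s3://example-lakehouse/curated/events/**/*.parquet')
    WHERE level = 'ERROR'
    GROUP BY source
    ORDER BY errors DESC
""").fetchdf()
print(result)
```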

What are the advantages of a data lakehouse?

The most common advantages of data lakehouses include the following:

  • Reduced redundancy. Because data lakehouses combine the functions of lakes and warehouses, they offer an all-purpose storage platform that can handle any type of data. As a result, organizations can move away from hybrid lake-warehouse models in which data is duplicated to keep it accessible.
  • Cost control. By using standardized object storage, data lakehouses offer the same cost-effectiveness and control as data lakes. Combined with the single-repository model of data warehouses, this lets enterprises consolidate data and keep costs under control.
  • Improved governance. Lakehouse models prioritize data security and make it easier to apply governance mechanisms that let people work with the data they need while adhering to regulatory compliance standards at global scale.

Data warehouse vs. data lake vs. data lakehouse: What’s the difference?

While data lakehouses combine the flexibility and cost-efficiency of data lakes with the querying capabilities of data warehouses, it’s important to understand how these storage environments differ.

Data warehouses

Data warehouses were the original big data storage option. Designed to provide a single source of truth for structured data, they offer a way for organizations to simplify data management by centralizing inputs. While they are extremely adept at this function, they lack the flexibility of their data lake descendants.

Data lakes

Data lakes evolved as a way to better manage structured and unstructured data simultaneously. These lakes are highly flexible and allow companies to store data in native form. In turn, this empowers more in-depth data science and reduces the total cost of data management.

Data lakehouses

Data lakehouses combine the key aspects of lakes and warehouses. This enables enterprises to use the single-repository model of data warehouses for unified storage without sacrificing the analytical flexibility of data lakes, allowing data lakehouses to excel at both analytical and machine learning workloads.

How causational data lakehouses make analytics, AI, and automation possible for observability, security, and business data

Data lakehouse architecture allows organizations to unlock value from all their observability, security, and business data. It uses cost-effective and durable object storage to store massive volumes of data without the traditional cost and performance tradeoffs.

Using technologies like OneAgent and OpenTelemetry, observability, security, and business data can be ingested into the data lakehouse at high speed and in its native format. As the data is ingested, dynamic topology mapping technology like SmartScape preserves the relationships between entities. Preserving this context makes it possible to use a causational approach that arrives at precise answers, rather than relying on rudimentary machine learning-based approaches. A dynamic map of the interactions and relationships between applications and the underlying infrastructure also makes it possible to zoom in and out of an issue at different stages of analysis. Distributed tracing and code-level analysis technology like PurePath automatically integrates high-fidelity distributed tracing with user experience data and provides code-level analytics.
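
As a rough sketch of what OpenTelemetry-based ingestion can look like, the example below emits a trace span enriched with service and business attributes so that context travels with the data. The endpoint, token, service name, and attributes are placeholders; any OTLP-compatible backend could receive these spans.

```python
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Resource attributes describe the emitting service so topology and context
# can be preserved alongside the telemetry itself.
resource = Resource.create({
    "service.name": "checkout-service",      # hypothetical service
    "deployment.environment": "production",
})

provider = TracerProvider(resource=resource)
# Endpoint and token are placeholders; configure them for your own OTLP backend.
exporter = OTLPSpanExporter(
    endpoint="https://example-backend/otlp/v1/traces",
    headers={"Authorization": "Api-Token <your-token>"},
)
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("process-order") as span:
    # Business attributes travel with the trace, keeping observability and
    # business data connected when they land in the lakehouse.
    span.set_attribute("order.value", 129.99)
    span.set_attribute("order.items", 3)
```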

By applying massively parallel processing and high-performance caches, a lakehouse can interrogate all this contextualized data at high speed for ad-hoc analytics or AI-powered precise answers. These answers and insights become the basis for automation, whether that’s automated remediation or proactive issue identification. As a result, organizations get context-rich observability and security analytics in cloud-native environments. On-demand insights across structured and unstructured data types are just a query away, empowering enterprises to confidently navigate big data waters, regardless of depth.

The Dynatrace difference, now powered by Grail

Dynatrace offers a unified software intelligence platform that supports your mission to accelerate cloud transformation, eliminate inefficient silos, and streamline processes. By managing observability data in Grail — the Dynatrace data lakehouse with massively parallel processing — all your data is automatically stored with causational context, with no rehydration, indexes, or schemas to maintain.

With Grail, Dynatrace cuts through the noise with unparalleled precision, empowering you with the answers and insights you need to focus on what is most critical. Thanks to the platform’s automation and AI, Dynatrace helps organizations tame cloud complexity, create operational efficiencies, and deliver better business outcomes.

Ready to do more with your data? Download the latest CIO Report to discover where traditional infrastructure monitoring isn’t keeping up — and what you can do about it.