
Migrating Critical Traffic At Scale with No Downtime — Part 1

The Netflix TechBlog

By Shyam Gala, Javier Fernandez-Ivern, Anup Rokkam Pratap, and Devang Shah. Hundreds of millions of customers tune into Netflix every day, expecting an uninterrupted and immersive streaming experience. The migration technique the post describes facilitates validation on multiple fronts.


What is IT automation?

Dynatrace

At its most basic, IT automation works by executing scripts or procedures either on a schedule or in response to particular events, such as a file being checked into a code repository. Adding AIOps to these automation processes makes the volume of data that applications and multicloud environments generate far less overwhelming.
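The sketch below illustrates that schedule-plus-event pattern with nothing but the Python standard library; the watched repository path and the automated procedure are hypothetical placeholders, not part of any particular automation product.

    import subprocess
    import sys
    import time
    from pathlib import Path

    WATCHED_REPO = Path("./repo")    # hypothetical checkout to watch for check-ins
    CHECK_INTERVAL_SECONDS = 60      # the "schedule" part: run once a minute

    def run_procedure():
        # Placeholder for whatever script or procedure is being automated.
        subprocess.run([sys.executable, "-c", "print('automated check ran')"], check=True)

    def latest_mtime(root: Path, fallback: float) -> float:
        # The "event" part: notice that a file was checked in by looking for a
        # newer modification time anywhere under the watched directory.
        if not root.exists():
            return fallback
        times = [p.stat().st_mtime for p in root.rglob("*") if p.is_file()]
        return max(times, default=fallback)

    if __name__ == "__main__":
        last_seen = 0.0
        while True:
            run_procedure()                      # scheduled execution
            newest = latest_mtime(WATCHED_REPO, last_seen)
            if newest > last_seen:               # event-driven execution
                last_seen = newest
                run_procedure()
            time.sleep(CHECK_INTERVAL_SECONDS)

A real setup would hand both triggers to a scheduler or CI system rather than a polling loop, but the division of labor is the same.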


Trending Sources


How Netflix uses eBPF flow logs at scale for network insight

The Netflix TechBlog

Berkeley Packet Filter (BPF) is an in-kernel execution engine that processes a virtual instruction set; it has been extended as eBPF to provide a safe way to extend kernel functionality. The flow data is also used by security and other partner teams for insight and incident analysis.
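As a much smaller, hedged illustration of that idea (using the bcc Python bindings rather than Netflix's flow-log tooling), the sketch below attaches a kprobe to tcp_v4_connect and prints the process behind each IPv4 connection attempt; it needs root privileges and an eBPF-capable kernel.

    # Illustrative toy, not Netflix's collector: load a tiny eBPF program with bcc
    # and trace IPv4 connection attempts via a kprobe on tcp_v4_connect.
    from bcc import BPF

    BPF_PROGRAM = r"""
    #include <uapi/linux/ptrace.h>

    int kprobe__tcp_v4_connect(struct pt_regs *ctx) {
        u32 pid = bpf_get_current_pid_tgid() >> 32;
        bpf_trace_printk("tcp_v4_connect from pid %d\n", pid);
        return 0;
    }
    """

    b = BPF(text=BPF_PROGRAM)   # compiles the C snippet and loads it into the kernel
    b.trace_print()             # stream bpf_trace_printk output until interrupted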


Conducting log analysis with an observability platform and full data context

Dynatrace

Modern organizations ingest petabytes of data daily, but legacy approaches to log analysis and management cannot accommodate this volume. Traditional log analysis helps organizations mitigate myriad risks and meet compliance regulations.


Optimizing anomaly detection and noise

Dynatrace

I wanted to understand how I could tune Dynatrace’s problem detection, but to do that I needed to understand the situation first. To achieve that, I took a big-data-analysis approach with two parts, starting with visualizing historic problem data via a “Swimlane Visualization”.
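The excerpt doesn't show what that swimlane looks like; as a rough, hypothetical sketch of the idea, the matplotlib example below draws made-up historic problem intervals as horizontal bars, one lane per service, so overlapping problem clusters stand out.

    # Rough sketch of a swimlane view of historic problem data. The intervals
    # below are invented purely for illustration.
    import matplotlib.pyplot as plt

    # (start_hour, duration_hours) pairs per lane -- hypothetical problems
    problems = {
        "frontend": [(1, 0.5), (6, 1.0), (18, 0.25)],
        "backend":  [(6, 1.5), (13, 0.5)],
        "database": [(6, 2.0)],
    }

    fig, ax = plt.subplots(figsize=(8, 3))
    for lane, spans in enumerate(problems.values()):
        ax.broken_barh(spans, (lane - 0.4, 0.8))   # one horizontal lane per service
    ax.set_yticks(range(len(problems)))
    ax.set_yticklabels(list(problems.keys()))
    ax.set_xlabel("hour of day")
    ax.set_title("Historic problems per service (swimlane view)")
    plt.tight_layout()
    plt.show()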


Data lakehouse innovations advance the three pillars of observability for more collaborative analytics

Dynatrace

The goal is to turn more data into insights so the whole organization can make data-driven decisions and automate processes. The Grail data lakehouse delivers massively parallel processing for answers at scale. Modern cloud-native computing is constantly upping the ante on data volume, variety, and velocity.


Formulating ‘Out of Memory Kill’ Prediction on the Netflix App as a Machine Learning Problem

The Netflix TechBlog

At Netflix, a streaming service running on millions of devices, we have a tremendous amount of data about device capabilities and characteristics, as well as runtime data, in our big data platform. With large data comes the opportunity to leverage it for predictive and classification-based analysis.
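The post frames out-of-memory kills as a supervised learning problem; the toy sketch below (synthetic features and labels, an off-the-shelf scikit-learn classifier, nothing from Netflix's actual pipeline) shows what that classification framing can look like.

    # Toy framing of OOM-kill prediction as binary classification on synthetic
    # stand-ins for device-capability and runtime features.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 5000

    # Hypothetical features: total device memory (GB), app memory in use (GB),
    # concurrent playback sessions, and a low-end-device flag.
    total_mem = rng.uniform(0.5, 8.0, n)
    app_mem = rng.uniform(0.1, 2.5, n)
    sessions = rng.integers(1, 5, n)
    low_end = (total_mem < 1.5).astype(int)
    X = np.column_stack([total_mem, app_mem, sessions, low_end])

    # Synthetic label: kills become likely when usage is high relative to capacity.
    y = ((app_mem / total_mem + rng.normal(0, 0.1, n)) > 0.8).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)
    print(classification_report(y_test, clf.predict(X_test)))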
