
Building Netflix’s Distributed Tracing Infrastructure

The Netflix TechBlog

Our tactical approach was to use Netflix-specific libraries for collecting traces from Java-based streaming services until open source tracer libraries matured. We chose Open-Zipkin because it had better integrations with our Spring Boot-based Java runtime environment. Stream processing raised its own question: to sample or not to sample trace data?
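As a rough illustration of the kind of setup the post describes (not Netflix's actual configuration), here is a minimal Brave/Zipkin sketch for a Java service, with a probabilistic sampler standing in for the "to sample or not to sample" decision; the service name, sampling rate, and collector URL are placeholders.

```java
import brave.ScopedSpan;
import brave.Tracing;
import brave.sampler.Sampler;
import zipkin2.Span;
import zipkin2.reporter.AsyncReporter;
import zipkin2.reporter.okhttp3.OkHttpSender;

public class TracingSketch {
    public static void main(String[] args) {
        // Ship finished spans to a Zipkin collector over HTTP.
        OkHttpSender sender = OkHttpSender.create("http://localhost:9411/api/v2/spans");
        AsyncReporter<Span> reporter = AsyncReporter.create(sender);

        // Head-based sampling: keep roughly 1% of traces (made-up rate).
        Tracing tracing = Tracing.newBuilder()
                .localServiceName("playback-api")       // hypothetical service name
                .sampler(Sampler.create(0.01f))
                .spanReporter(reporter)
                .build();

        // Record one span around a unit of work.
        ScopedSpan span = tracing.tracer().startScopedSpan("resolve-manifest");
        try {
            // ... call downstream services here ...
        } finally {
            span.finish();
        }

        tracing.close();
        reporter.close();
        sender.close();
    }
}
```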


Evolution of Netflix Conductor:

The Netflix TechBlog

This addition also provides the option to use the Elasticsearch RestClient instead of the Transport Client, which was enforced in the previous version. However, there is no official Python client on PyPI, and the existing one lacks some of the newer additions to the Java client.
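For context, the Elasticsearch RestClient mentioned here speaks HTTP on port 9200, whereas the older Transport Client joined the cluster over the binary transport protocol on port 9300. A minimal, generic sketch of the newer low-level REST client follows; this is not Conductor's own wiring, and the host and port are placeholders.

```java
import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class EsRestClientSketch {
    public static void main(String[] args) throws Exception {
        // HTTP-based low-level REST client (port 9200) instead of the
        // node-protocol Transport Client (port 9300).
        RestClient client = RestClient.builder(
                new HttpHost("localhost", 9200, "http")).build();

        // Simple smoke test: ask the cluster for its health.
        Response response = client.performRequest(new Request("GET", "/_cluster/health"));
        System.out.println(response.getStatusLine());

        client.close();
    }
}
```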

Lambda 189

Trending Sources


Scale Your Liferay Application by Clustering

Enprowess

And whenever one server isn’t sufficient to serve the high-traffic needs of your portal, you can scale your Liferay portal by adding additional servers. Clustering is mainly required for parallel processing, fault tolerance, load balancing, and handling high traffic on the application. All you need to do is modify one Ehcache configuration file.
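As a rough idea of what that one configuration change can look like, here is a sketch based on stock Ehcache 2.x RMI replication rather than EnProwess's exact file; the multicast address, port, and time-to-live are placeholders. The cluster-aware Ehcache file registers a peer provider (to discover the other nodes) and a peer listener (to receive replication messages).

```xml
<ehcache>
  <!-- Discover the other Liferay nodes via multicast (placeholder address/port). -->
  <cacheManagerPeerProviderFactory
      class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
      properties="peerDiscovery=automatic,multicastGroupAddress=230.0.0.1,multicastGroupPort=4446,timeToLive=1"/>

  <!-- Listen for cache replication messages from peer nodes. -->
  <cacheManagerPeerListenerFactory
      class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"/>
</ehcache>
```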

Cache 52

Data Movement in Netflix Studio via Data Mesh

The Netflix TechBlog

CDC events can also be sent to Data Mesh via a Java Client Producer Library. However, it is paramount that we validate the complete set of identifiers, such as the list of movie ids, across producers and consumers for higher overall confidence in the data transport layer of choice.
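The producer library itself is internal to Netflix, but the validation idea is simple to illustrate: compare the set of identifiers observed on each side of the pipeline and flag anything that went missing. A toy sketch with hypothetical names, not the Data Mesh API:

```java
import java.util.HashSet;
import java.util.Set;

public class IdentifierValidationSketch {
    // Hypothetical check: every movie id emitted by the producer should
    // eventually be observed by the consumer.
    static Set<Long> missingOnConsumer(Set<Long> producedIds, Set<Long> consumedIds) {
        Set<Long> missing = new HashSet<>(producedIds);
        missing.removeAll(consumedIds);
        return missing;
    }

    public static void main(String[] args) {
        Set<Long> produced = Set.of(101L, 102L, 103L);
        Set<Long> consumed = Set.of(101L, 103L);
        // Prints [102]: an id that the transport layer apparently dropped.
        System.out.println("Movie ids missing on the consumer side: "
                + missingOnConsumer(produced, consumed));
    }
}
```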

Big Data 253

DBLog: A Generic Change-Data-Capture Framework

The Netflix TechBlog

Nonetheless, we found a number of limitations that could not satisfy our requirements, e.g. stalling the processing of log events until a dump is complete, lacking the ability to trigger dumps on demand, blocking write traffic by locking tables, or being unable to write events to an arbitrary output.
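To make those limitations concrete, here is a toy sketch of the opposite behaviour a CDC framework can aim for: log events keep flowing while a dump proceeds in small chunks, dumps can be requested on demand, and chunks are read with plain selects rather than table locks. The interfaces and loop below are hypothetical illustrations, not the DBLog implementation.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class InterleavedCdcSketch {
    interface LogReader { void pollAndEmit(); }  // reads the transaction log
    interface ChunkDumper {
        boolean hasMoreChunks();
        void dumpNextChunk();                    // plain SELECT, no table lock
    }

    private final AtomicBoolean dumpRequested = new AtomicBoolean(false);

    // "Dump on demand": callable at any time, e.g. for repairs or new consumers.
    public void requestDump() {
        dumpRequested.set(true);
    }

    public void run(LogReader log, ChunkDumper dumper) {
        while (true) {
            log.pollAndEmit();                   // never stalled by a running dump
            if (dumpRequested.get() && dumper.hasMoreChunks()) {
                dumper.dumpNextChunk();          // interleave one chunk at a time
            } else if (dumpRequested.get()) {
                dumpRequested.set(false);        // dump finished
            }
        }
    }
}
```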

Database 197