Use Case Track

Taming large-state to join datasets for Personalization

Streaming engines like Apache Flink are redefining ETL and data processing. Data can be extracted, transformed, filtered, and written out in real time with an ease that matches batch processing. The real challenge in matching the prowess of batch ETL, however, lies in doing joins, maintaining state, and pausing or resting data dynamically. Netflix has a microservices architecture. Different microservices serve and record different kinds of user interactions with the product. Some of these live services generate millions of events per second, all carrying meaningful but often partial information. Things get exciting when we want to combine the events coming from one high-traffic microservice with those of another. Joining these raw events produces the rich datasets used to train the machine learning models that serve Netflix recommendations.

Historically we have joined these large-volume datasets in batch. But if the data is generated in real time, why shouldn't it be processed downstream in real time? Why wait a full day for information from an event that was generated a few minutes ago? In this talk, we will share how we solved a complex join of two high-volume event streams using Flink, and we will discuss maintaining large state, fault tolerance of a stateful application, and strategies for failure recovery.
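A stateful two-stream join of this kind can be sketched with Flink's DataStream API: key both streams by a shared identifier, buffer one side in keyed state, emit a joined record when the matching event from the other side arrives, and use a timer to evict state that never matches. The sketch below is a minimal, hypothetical illustration and not the talk's actual implementation; the event classes, the user-id key, the TTL, and the checkpoint interval are assumptions made for the example.

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.co.KeyedCoProcessFunction;
import org.apache.flink.util.Collector;

public class TwoStreamJoinSketch {

    // Illustrative event types; the real events and their fields are not part of this page.
    public static class ImpressionEvent { public String userId; public String title;
        public ImpressionEvent() {} public ImpressionEvent(String u, String t) { userId = u; title = t; } }
    public static class PlayEvent { public String userId; public long playMs;
        public PlayEvent() {} public PlayEvent(String u, long p) { userId = u; playMs = p; } }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Checkpointing makes the (potentially very large) keyed join state recoverable after failures.
        env.enableCheckpointing(60_000);

        env.fromElements(new ImpressionEvent("user-1", "Title A"))
           .keyBy(i -> i.userId)
           .connect(env.fromElements(new PlayEvent("user-1", 1200)).keyBy(p -> p.userId))
           .process(new ImpressionPlayJoin())
           .print();

        env.execute("two-stream-join-sketch");
    }

    /** Buffers one impression per user until the matching play arrives or a TTL expires. */
    public static class ImpressionPlayJoin
            extends KeyedCoProcessFunction<String, ImpressionEvent, PlayEvent, String> {

        private static final long TTL_MS = 6 * 60 * 60 * 1000; // assumed eviction window of 6 hours
        private transient ValueState<ImpressionEvent> pending;

        @Override
        public void open(Configuration parameters) {
            pending = getRuntimeContext().getState(
                    new ValueStateDescriptor<>("pending-impression", ImpressionEvent.class));
        }

        @Override
        public void processElement1(ImpressionEvent imp, Context ctx, Collector<String> out) throws Exception {
            // Buffer the impression; the timer bounds state growth for impressions that never match.
            pending.update(imp);
            ctx.timerService().registerProcessingTimeTimer(
                    ctx.timerService().currentProcessingTime() + TTL_MS);
        }

        @Override
        public void processElement2(PlayEvent play, Context ctx, Collector<String> out) throws Exception {
            ImpressionEvent imp = pending.value();
            if (imp != null) {
                out.collect(imp.userId + " played " + imp.title + " for " + play.playMs + " ms");
                pending.clear();
            }
        }

        @Override
        public void onTimer(long timestamp, OnTimerContext ctx, Collector<String> out) {
            // Evict unmatched state so the join's footprint stays bounded.
            pending.clear();
        }
    }
}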

Authors

Shriya Arora
Netflix

Shriya works on the Data Engineering team for Personalization, which, among other things, delivers the recommendations made for each user. The team is responsible for the data that goes into training and scoring the various machine learning models that power the Netflix homepage, and has been moving some of Netflix's core datasets from a once-a-day batch ETL to near-real-time processing with Apache Flink. Before Netflix, she was at Walmart Labs, where she helped build and architect the new-generation item setup, moving from batch processing to streaming. That team used Storm and Kafka to enable a microservices architecture that allows products to be updated in near real time, as opposed to the once-a-day updates of the legacy framework.
