Operations Track

Automating Flink Deployments to Kubernetes

Deploying Flink jobs while preserving state requires a number of manual CLI steps. As error-prone as that is when done by hand, any serious software project will rely on a continuous integration pipeline to automate it. Stringing a handful of commands together in a CI script will get you going. However, add a Kubernetes environment with restricted access into the mix and you've got yourself an operational headache. What you really want is a tested, maintainable and repeatable process you can rely on. In this talk we'll discuss the major pain points we had to overcome when deploying jobs to Apache Flink on a Kubernetes cluster, and how we solved them by creating an open-source deployment tool.
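To give a feel for the manual CLI steps the talk refers to, here is a hedged sketch of a typical stateful redeployment using the standard Flink CLI: cancel the running job with a savepoint, then resubmit the new jar from that savepoint. The job name, jar path, and savepoint location are placeholders, and in a restricted Kubernetes cluster each of these commands would additionally have to be tunnelled to the JobManager.

```shell
#!/usr/bin/env sh
# Sketch of a manual stateful Flink redeployment.
# JAR_PATH, SAVEPOINT_DIR, and the job name "my-job" are illustrative placeholders.
set -eu

JAR_PATH="target/my-job.jar"      # assumption: the newly built job artifact
SAVEPOINT_DIR="s3://savepoints"   # assumption: a durable savepoint target

# 1. Look up the running job's ID by its name in the list of running jobs.
JOB_ID=$(flink list -r | grep "my-job" | awk '{print $4}')

# 2. Cancel the job while triggering a savepoint, capturing its state;
#    parse the savepoint path out of the CLI output.
SAVEPOINT=$(flink cancel -s "$SAVEPOINT_DIR" "$JOB_ID" | grep -o 's3://[^ ]*')

# 3. Submit the new jar in detached mode, restoring from the savepoint.
flink run -d -s "$SAVEPOINT" "$JAR_PATH"
```

Every one of these steps can fail halfway (no running job found, savepoint timeout, incompatible state), which is exactly why scripting them naively in CI is fragile.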

Authors

Marc Rooding
ING

Currently working as a Senior Full Stack Engineer at ING’s Wholesale Banking Advanced Analytics. With a background in software development and consultancy, he helps organizations validate and flesh out new digital opportunities. His passion and craftsmanship are fed by the belief that there is still lots of room for improvement in software engineering. His fields of interest span all aspects of software architecture, software engineering, and cloud-based engineering.

Niels Dennissen
ING

After a bachelor's in Computer Science and a master's in Artificial Intelligence, I started working at ING roughly two and a half years ago. Following a one-year IT traineeship, I joined the team I'm currently in, Wholesale Banking Advanced Analytics. Here I work as a Data Engineer on project Katana, which aims to aid traders in Financial Markets and involves lots of real-time streaming systems, one of which, obviously, is Apache Flink! We've been using Flink for nearly a year now, resulting in open-sourcing a deployer on Kubernetes and implementing our own state manager using Apache Avro.
