Company Updates, Flink Community

data Artisans at Berlin Buzzwords 2017

On June 12-13, 2017, the open source data community will meet at Berlin Buzzwords. This year, data Artisans will take part in the conference in a number of ways.

For the first time, we will have a booth at the conference (at Palais), and will be available to discuss Apache Flink® and the dA Platform. Stop by to connect with Apache Flink experts and learn more about implementing enterprise-grade streaming data applications in production.

And there will be even more from data Artisans, such as conference talks, book signings, AMA sessions and off-site events. Don’t miss out!

BOOTH
Monday, June 12, 2017 8:30am to 6:00pm
Tuesday, June 13, 2017 9:00am to 5:30pm
Meet Patrick Lucas, Senior Data Engineer; Mike Winters, Product Marketing Manager; Daniela Bentrup, Event Manager; and the data Artisans engineering team at the data Artisans booth at Palais to learn more about Apache Flink and the dA Platform and to collect your free Flink books and T-shirts.

OFFSITE EVENT Wednesday, June 14, 2017 – 7pm-10pm | Location: idealo Internet GmbH
The 2nd Apache Flink Meetup in 2017 will take place at idealo. Join us for an evening of pizza, beer, and the latest and greatest on Apache Flink® with our host company idealo. data Artisans software engineer and Flink committer Tzu-Li (Gordon) Tai will be among the speakers talking about Apache Flink.

BOOK SIGNING Monday, June 12, 2017 – 4:00pm (updated time) | Location: Palais at Kulturbrauerei, data Artisans booth
A book signing will be held in the data Artisans booth with our CEO and co-founder Kostas Tzoumas. This is a great opportunity for you to meet a co-creator of Apache Flink and one of the authors of “Introduction to Apache Flink” and also to get a free signed copy of the book.

AMA SESSION Tuesday, June 13, 2017 – 4:00pm (updated time) | Location: Palais at Kulturbrauerei, data Artisans booth
Come to our AMA session and meet data Artisans’ CEO Kostas Tzoumas and Software Engineers Fabian Hueske and Kostas Kloudas. Join the original creators of Apache Flink for an open Q&A session and talk to the data Artisans team about your use cases and experiences with Flink, or just come to listen in.

CONFERENCE TALK Monday, June 12, 2017 – 3:20pm-4pm | Location: Kesselhaus
data Artisans’ CTO Stephan Ewen will present the session “Experiences running Flink at Very Large Scale”. This talk shares insights from deploying and tuning Flink stream processing applications at very large scale. We share lessons learned from users, contributors, and our own experiments about running demanding streaming jobs at scale. The talk will explain what aspects currently render a job particularly demanding, show how to configure and tune a large-scale Flink job, and outline what the Flink community is working on to make the out-of-the-box experience as smooth as possible. We will, for example, dive into analyzing and tuning checkpointing, selecting and configuring state backends, understanding common bottlenecks, and understanding and configuring network parameters.
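For readers who want a head start on those tuning topics, here is a minimal sketch of how checkpointing and a state backend are typically configured in a Flink job. The interval, timeout, checkpoint path, and class name are illustrative assumptions, not recommendations from the talk.

```java
import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointTuningSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Take a checkpoint every 60 seconds with exactly-once guarantees.
        env.enableCheckpointing(60_000, CheckpointingMode.EXACTLY_ONCE);

        // Leave room between checkpoints and bound how long a single checkpoint may take.
        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(30_000);
        env.getCheckpointConfig().setCheckpointTimeout(600_000);

        // Keep large keyed state in RocksDB; the checkpoint URI is a placeholder.
        env.setStateBackend(new RocksDBStateBackend("hdfs:///flink/checkpoints"));

        // Placeholder pipeline so the sketch runs end to end.
        env.fromElements(1, 2, 3).print();

        env.execute("Checkpoint tuning sketch");
    }
}
```

The RocksDB backend trades some throughput for state that can grow beyond memory; jobs with small state may prefer the heap-based backends instead.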

CONFERENCE TALK Tuesday, June 13, 2017 – 12:20pm-1pm | Location: Maschinenhaus
data Artisans’ Software Engineer Fabian Hueske will present the session “Stream Analytics with SQL on Apache Flink”. SQL is undoubtedly the most widely used language for data analytics. It is declarative, many database systems and query processors feature advanced query optimizers and highly efficient execution engines, and last but not least, it is the standard that everybody knows and uses. With stream processing technology becoming mainstream, a question arises: “Why isn’t SQL widely supported by open source stream processors?” One answer is that SQL’s semantics and syntax were not designed with the characteristics of streaming data in mind. Consequently, systems that want to support SQL on data streams have to overcome a conceptual gap.

Apache Flink is a distributed stream processing system. Due to its support for event-time processing, exactly-once state semantics, and its high throughput capabilities, Flink is very well suited for streaming analytics. For about a year, the Flink community has been working on two relational APIs for unified stream and batch processing, the Table API and SQL. The Table API is a language-integrated relational API and the SQL interface is compliant with standard SQL. Both APIs are semantically compatible and share the same optimization and execution paths based on Apache Calcite. A core principle of both APIs is to provide the same semantics for batch and streaming data sources, meaning that a query should compute the same result regardless of whether it was executed on a static data set, such as a file, or on a data stream, such as a Kafka topic.

In this talk, we present the semantics of Apache Flink’s relational APIs for stream analytics. We discuss their conceptual model and showcase their usage. The central concept of these APIs is the dynamic table. We explain how streams are converted into dynamic tables and vice versa without losing information, thanks to the stream-table duality. Relational queries on dynamic tables behave similarly to materialized view definitions and produce new dynamic tables. We show how dynamic tables are converted back into changelog streams or written as materialized views to external systems, such as Apache Kafka or Apache Cassandra, and updated in place with low latency.
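To make the dynamic table idea concrete, here is a minimal sketch of a continuous SQL query on a stream using Flink’s Table API and SQL (Java, Flink 1.3-era API). The table name, columns, and toy data are made up for illustration.

```java
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.java.StreamTableEnvironment;
import org.apache.flink.types.Row;

public class StreamSqlSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);

        // A toy click stream standing in for a Kafka topic.
        DataStream<Tuple2<String, String>> clicks = env.fromElements(
                Tuple2.of("alice", "/home"),
                Tuple2.of("bob", "/cart"),
                Tuple2.of("alice", "/checkout"));

        // Register the stream as a dynamic table with named columns.
        tableEnv.registerDataStream("Clicks", clicks, "username, url");

        // A continuous query on the dynamic table; the result is itself a dynamic table.
        Table counts = tableEnv.sql(
                "SELECT username, COUNT(url) AS cnt FROM Clicks GROUP BY username");

        // Convert the result back into a changelog (retract) stream and print it.
        tableEnv.toRetractStream(counts, Row.class).print();

        env.execute("Stream SQL sketch");
    }
}
```

Because the GROUP BY aggregate keeps updating its result as new clicks arrive, the sketch converts the result table into a retract (changelog) stream rather than an append-only stream.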

CONFERENCE TALK Tuesday, June 13, 2017 – 2:30pm-3:10pm | Location: Maschinenhaus
data Artisans’ Software Engineer Kostas Kloudas will present the session “Complex Event Processing with Flink: the state of FlinkCEP”. Pattern matching over event streams is increasingly used in many areas, including financial services and click stream analysis. Flink, as a true stream processing engine, emerges as a natural candidate for these use cases. In this talk, we will present FlinkCEP, a library for Complex Event Processing (CEP) built on Flink. At the conceptual level, we will look at the different patterns the library can support, present the main building blocks we implemented to support them, and discuss possible future additions that will further extend the library’s coverage. At the practical level, we will show how FlinkCEP’s integration with Flink allows it to take advantage of Flink’s rich ecosystem (e.g. connectors) and its stream processing capabilities, such as support for event-time processing, exactly-once state semantics, fault tolerance, savepoints, and high throughput.
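As a small taste of the library, below is a minimal, hypothetical FlinkCEP sketch that raises an alert when two consecutive temperature readings exceed a threshold within ten seconds; the events, threshold, and pattern names are illustrative.

```java
import org.apache.flink.cep.CEP;
import org.apache.flink.cep.PatternSelectFunction;
import org.apache.flink.cep.PatternStream;
import org.apache.flink.cep.pattern.Pattern;
import org.apache.flink.cep.pattern.conditions.SimpleCondition;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.time.Time;

import java.util.List;
import java.util.Map;

public class CepSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Toy stream of temperature readings; in practice this could come from a connector.
        DataStream<Double> readings = env.fromElements(22.0, 101.5, 103.2, 25.0);

        // Pattern: two consecutive readings above 100 degrees within 10 seconds.
        Pattern<Double, ?> warningPattern = Pattern.<Double>begin("first")
                .where(new SimpleCondition<Double>() {
                    @Override
                    public boolean filter(Double temp) {
                        return temp > 100.0;
                    }
                })
                .next("second")
                .where(new SimpleCondition<Double>() {
                    @Override
                    public boolean filter(Double temp) {
                        return temp > 100.0;
                    }
                })
                .within(Time.seconds(10));

        PatternStream<Double> matches = CEP.pattern(readings, warningPattern);

        // Emit an alert string for every matched sequence.
        matches.select(new PatternSelectFunction<Double, String>() {
            @Override
            public String select(Map<String, List<Double>> match) {
                return "Overheating: " + match.get("first").get(0)
                        + " then " + match.get("second").get(0);
            }
        }).print();

        env.execute("FlinkCEP sketch");
    }
}
```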

Get in touch with our team
Reach out to us before or after the show at info@data-artisans.com to schedule meetings or ask any follow-up questions you may have. We hope to see you at the conference! Follow us on Twitter @dataArtisans for on-the-fly updates and giveaways during Berlin Buzzwords, and follow the hashtag #dataArtisans.