Senior SQL Engine Developer

Berlin, Germany (10963)

Posted: 2018-11-28
Job page: https://data-artisans.workable.com/jobs/879850
Apply: https://data-artisans.workable.com/jobs/879850/candidates/new
Short link: https://data-artisans.workable.com/j/932F12CDD0

Data stream processing is redefining what’s possible in the world of data-driven applications and services. Apache Flink is one of the systems at the forefront of this development, pushing the boundaries of what can be achieved with data stream processing.

Apache Flink currently powers some of the largest data stream processing pipelines in the world, with users such as Alibaba, Uber, ING, and Netflix running Flink in production. Flink is also one of the most active and fastest-growing open source projects in the Apache Software Foundation.

One of Flink’s most popular APIs is SQL. Unlike many other systems, Flink offers SQL as a unified API for batch and stream processing, meaning that a query computes the same results regardless of whether it is executed on static data sets or on data streams. Flink’s SQL support is the foundation for both company-internal and publicly available data analytics services at enterprises such as Alibaba, Huawei, and Uber.
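
As a concrete illustration (not part of the original posting), here is a minimal sketch of what that unified API looks like in practice: a plain SQL query submitted through Flink’s Table API. It assumes a recent Flink release (TableEnvironment.create, executeSql); the table name, its columns, and the datagen connector are placeholders chosen for the example. Switching the environment from streaming to batch mode changes how the query is executed, not the results it computes.

    // Minimal sketch of Flink's unified SQL API; exact Table API entry points
    // differ across Flink versions.
    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.Table;
    import org.apache.flink.table.api.TableEnvironment;

    public class UnifiedSqlSketch {
        public static void main(String[] args) {
            // inStreamingMode() vs. inBatchMode() selects the execution mode;
            // the query text and its results stay the same.
            EnvironmentSettings settings =
                    EnvironmentSettings.newInstance().inStreamingMode().build();
            TableEnvironment tEnv = TableEnvironment.create(settings);

            // Placeholder source table; a real pipeline would point the connector
            // at Kafka, a filesystem, or another external system.
            tEnv.executeSql(
                    "CREATE TABLE clicks (user_name STRING, url STRING) "
                            + "WITH ('connector' = 'datagen')");

            // Standard SQL over an unbounded table: a continuously updating aggregate.
            Table result = tEnv.sqlQuery(
                    "SELECT user_name, COUNT(url) AS cnt FROM clicks GROUP BY user_name");
            result.execute().print();
        }
    }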

data Artisans was founded in 2014 by the original creators of the Apache Flink project, and we’re building the next-generation platform for real-time data applications. We are tackling some of today’s biggest challenges in big data and data streaming.

Requirements

Your role:

  • As a member of the data Artisans team that develops Flink’s relational APIs, you will work on “Streaming SQL”, one of the hottest topics in stream processing.
  • Flink’s SQL support receives a lot of attention from both users and contributors. You will work closely with the Apache Flink community to extend support for ANSI SQL features, tune performance at the level of both the query optimizer and the query runtime, and implement connectors that ingest data from and emit data to external storage systems.
  • When you are not coding or discussing feature designs, you’ll have plenty of opportunities to help evangelize Flink’s SQL API by writing blog posts or speaking at meetups and conferences around the world.
  • Please note: in this role, you will design and implement a system that optimizes and executes SQL queries. Think of it as building (not using!) a database system like Oracle or SQL Server yourself; the sketch after this list illustrates the kind of component that involves.
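
To make the “building, not using” point concrete, here is a purely illustrative toy (it is not Flink code and not part of the original posting): an SQL engine is ultimately composed of pieces like these, where an optimizer turns a query into a tree of physical operators and the runtime then executes that tree.

    // Toy sketch of a query runtime: a hand-built physical plan for
    // "SELECT * FROM t WHERE amount > 10". All names here are invented.
    import java.util.Arrays;
    import java.util.Iterator;
    import java.util.List;
    import java.util.function.Predicate;
    import java.util.stream.StreamSupport;

    /** A physical operator produces rows (modeled as Object[] for brevity). */
    interface PhysicalOperator extends Iterable<Object[]> {}

    /** Leaf operator: scans an in-memory table. */
    final class ScanOperator implements PhysicalOperator {
        private final List<Object[]> rows;
        ScanOperator(List<Object[]> rows) { this.rows = rows; }
        public Iterator<Object[]> iterator() { return rows.iterator(); }
    }

    /** Filter operator: the physical counterpart of a SQL WHERE clause. */
    final class FilterOperator implements PhysicalOperator {
        private final PhysicalOperator input;
        private final Predicate<Object[]> predicate;
        FilterOperator(PhysicalOperator input, Predicate<Object[]> predicate) {
            this.input = input;
            this.predicate = predicate;
        }
        public Iterator<Object[]> iterator() {
            return StreamSupport.stream(input.spliterator(), false)
                    .filter(predicate).iterator();
        }
    }

    public class ToyEngine {
        public static void main(String[] args) {
            PhysicalOperator plan = new FilterOperator(
                    new ScanOperator(Arrays.asList(
                            new Object[]{"a", 5}, new Object[]{"b", 42})),
                    row -> (int) row[1] > 10);
            plan.forEach(row -> System.out.println(Arrays.toString(row))); // prints [b, 42]
        }
    }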


What you’ll do all day:

  • Design and implement new features for Flink’s SQL API.
  • Tune the performance of SQL queries by tweaking the query optimizer, improving generated code, and removing bottlenecks.
  • Implement connectors for external stream sources and storage systems.
  • Work with external contributors, discuss their designs, and review their code.
  • Write blog posts and present Flink at high-impact conferences around the world.
  • Become an Apache Flink and stream processing expert.

You will love this job if you …

… are familiar with the design of distributed data processing systems (e.g., Hadoop, Kafka, Flink, Spark).

… know how to design and implement a relational database or query processor.

… have a good command of Java and/or Scala, and of course SQL.

… like working with an awesome open source community to tackle challenging problems.

… have great English skills and like to get in touch with users from around the world.

… hold at least a Master’s-level degree in computer science, mathematics, engineering, or a similar field.

Benefits

  • Competitive salary
  • Tech gear of your choice
  • Free public transportation
  • International team environment (10 nationalities so far)
  • Flexible working arrangements (home office, flexible working hours)
  • Unlimited vacation policy, so take time off when you need it
  • Snacks, coffee and beverages in the office
  • Relocation assistance if needed
  • Great team of gifted and extraordinary software engineers
  • Hackathons and weekly technical Lunch Talks to keep your head full of inspiration and ideas!


Please note that by applying for this job offer you agree that data Artisans will use your personal data in the recruitment process. The legal basis for processing your application data is Article 6 par. 1 lit. b) GDPR. Your rights in respect of data protection can be found in Chapter 3 of the GDPR, and you have the right to contact a supervisory authority. Further, you may contact our data protection officer via dataprotection@data-artisans.com.

Employment type: Full-time
Industry: Computer Software
Function: Engineering
Keywords: Apache Flink, Big Data, Data streaming, Distributed Systems, SQL, databases, presto
