Introduction to data Artisans and Flink
data Artisans and Flink basics start:
- Flink is an Apache project sponsored by the Berlin-based company data Artisans.
- Flink has been viewed in a few different ways, all of which are similar to how Spark is seen. In particular, per co-founder Kostas Tzoumas:
- Flink’s original goal was “Hadoop done right”.
- Now Flink is focused on streaming analytics, as an alternative to Spark Streaming, Samza, et al.
- Kostas seems to see Flink as a batch-plus-streaming engine that’s streaming-first.
Like many open source projects, Flink seems to have been partly inspired by a Google paper.
To this point, data Artisans and Flink have less maturity and traction than Databricks and Spark. For example:
- The first line of Flink code dates back to 2010.
- data Artisans and the Flink open source project both started in 2014.
- When I met him in late June, Kostas told me that data Artisans had raised $7 million and had 15 employees.
- Flink’s current SQL support is very minor.
Per Kostas, about half of Flink committers are at data Artisans; others are at Cloudera, Hortonworks, Confluent, Intel, at least one production user, and some universities. Kostas provided about five examples of production Flink users, plus a couple of very big names that were sort-of users (one was using a forked version of Flink, while the other was becoming a user “soon”).
The technical story at data Artisans/Flink revolves around the assertion “We have the right architecture for streaming.” If I understood data Artisans co-founder Stephan Ewen correctly on a later call, the two key principles in support of that seem to be:
- The key is to keep data “transport” running smoothly without interruptions, delays or bottlenecks, where the relevant sense of “transport” is movement from one operator/operation to the next.
- In this case, the Flink folks feel that modularity supports efficiency.
In particular:
- Anything that relates to consistency/recovery is kept almost entirely separate from basic processing, with minimal overhead and nothing that resembles a lock.
- Windowing and so on operate separately from basic “transport” as well.
- The core idea is that special markers, currently around 20 bytes each, are injected into the streams. When a marker reaches an operator, the operator snapshots the then-current state of its part of the stream.
- Should recovery ever be needed, consistency is achieved by assembling all the snapshots corresponding to a single marker, and replaying any processing that happened after those snapshots were taken.
- Actually, this is oversimplified, in that it assumes there’s only a single input stream.
- A lot of Flink’s cleverness, I gather, is involved in assembling a consistent snapshot despite the realities of multiple input streams.
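To make the marker mechanism concrete, here is a minimal sketch, in Java, of the alignment logic an operator with multiple input streams might run. This is my illustration rather than Flink’s actual code; every name in it (AligningOperator, onBarrier, and so on) is hypothetical.

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Queue;
import java.util.Set;

// Hypothetical sketch of marker ("barrier") alignment across multiple
// input streams. Not Flink's actual code; all names are illustrative.
class AligningOperator {
    private final int numInputChannels;
    private final Set<Integer> blockedChannels = new HashSet<>();
    private final Map<Integer, Queue<Object>> buffered = new HashMap<>();
    private long state = 0;  // stand-in for whatever state the operator keeps

    AligningOperator(int numInputChannels) {
        this.numInputChannels = numInputChannels;
        for (int ch = 0; ch < numInputChannels; ch++) {
            buffered.put(ch, new ArrayDeque<>());
        }
    }

    // Called when an ordinary record arrives on a channel.
    void onRecord(int channel, Object record) {
        if (blockedChannels.contains(channel)) {
            // A marker already arrived on this channel for the current
            // snapshot; hold records back so they are not mixed into it.
            buffered.get(channel).add(record);
        } else {
            process(record);
        }
    }

    // Called when a snapshot marker (~20 bytes on the wire, per the above)
    // arrives on a channel.
    void onBarrier(int channel, long checkpointId) {
        blockedChannels.add(channel);
        if (blockedChannels.size() == numInputChannels) {
            // Markers for this checkpoint have been seen on every input,
            // so the operator's state is now consistent with the marker.
            snapshotState(checkpointId);
            forwardBarrier(checkpointId);
            // Unblock, and drain anything buffered while aligning.
            blockedChannels.clear();
            for (Queue<Object> q : buffered.values()) {
                while (!q.isEmpty()) process(q.poll());
            }
        }
    }

    private void process(Object record) { state += record.hashCode(); }
    private void snapshotState(long checkpointId) { /* write state to durable storage */ }
    private void forwardBarrier(long checkpointId) { /* inject marker into output streams */ }
}
```

The point of the buffering is that records arriving after a channel’s marker must not leak into the snapshot, which is exactly the multiple-input subtlety mentioned above.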
The upshot, Flink partisans believe, is that Flink matches the high throughput of Spark Streaming while also matching the low latency of Storm.
The Flink folks naturally have a rich set of opinions about streaming. Besides the points already noted, these include:
- “Exactly once” semantics are best in almost all use cases, as opposed to “at least once”, or to turning off fault tolerance altogether. (Exceptions might arise in extreme performance scenarios, or because of legacy systems’ expectations.) A configuration sketch follows this list.
- Repetitive, scheduled batch jobs are often “streaming processes in disguise”. Besides any latency benefits, reimplementing them using streaming technology might simplify certain issues that can occur around the boundaries of batch windows. (The phrase “continuous processing” could reasonably be used here.)
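On the first of those points: in Flink’s DataStream API the choice surfaces as a checkpointing-mode setting. A minimal sketch, assuming a Flink release recent enough to expose CheckpointingMode:

```java
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointModes {
    public static void main(String[] args) {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Checkpoint every 5 seconds with exactly-once guarantees -- the
        // default mode and, per the view above, the right choice for
        // almost all use cases.
        env.enableCheckpointing(5_000, CheckpointingMode.EXACTLY_ONCE);

        // For extreme-performance scenarios, one can relax to at-least-once
        // (which skips marker alignment), or leave checkpointing disabled
        // entirely, which turns fault tolerance off.
        // env.enableCheckpointing(5_000, CheckpointingMode.AT_LEAST_ONCE);
    }
}
```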
We discussed joins quite a bit, but this was before I realized that Flink didn’t have much SQL support. Let’s just say they sounded rather primitive even when I assumed they were done via SQL.
Our discussion of windowing was more upbeat. Flink supports windows based on either event timestamps or data arrival time, and the two can be combined as needed. Stephan thinks this flexibility is important.
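To make the distinction concrete, here is a sketch using Flink’s DataStream windowing API; the ClickEvent type and its fields are hypothetical stand-ins, and the watermark-based event-time idiom shown comes from Flink releases later than the one current when this was written.

```java
import java.time.Duration;

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class WindowKinds {
    // Hypothetical event type, for illustration only.
    public static class ClickEvent {
        public String userId = "user-1";
        public long timestampMillis = System.currentTimeMillis();  // when the event happened
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<ClickEvent> clicks = env.fromElements(new ClickEvent());

        // Timestamp-based ("event time") windows: events are grouped by when
        // they happened, with watermarks tolerating some out-of-order arrival.
        clicks.assignTimestampsAndWatermarks(
                        WatermarkStrategy
                                .<ClickEvent>forBoundedOutOfOrderness(Duration.ofSeconds(5))
                                .withTimestampAssigner((e, ts) -> e.timestampMillis))
                .keyBy(e -> e.userId)
                .window(TumblingEventTimeWindows.of(Time.minutes(1)))
                .reduce((a, b) -> b)
                .print();

        // Arrival-time ("processing time") windows: events are grouped by
        // when they reach the operator, regardless of their own timestamps.
        clicks.keyBy(e -> e.userId)
                .window(TumblingProcessingTimeWindows.of(Time.minutes(1)))
                .reduce((a, b) -> b)
                .print();

        env.execute("window kinds sketch");
    }
}
```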
As for Flink use cases, they’re about what you’d expect:
- Plenty of data transformation, because that’s how all these systems start out. Indeed, the earliest Flink adoption was for batch transformation.
- Plenty of stream processing.
But Flink doesn’t have all the capabilities one would want for the kinds of investigative analytics commonly done on Spark.
Related links
- My recent series of Spark posts offers comparison and background for this one.
- I surveyed Spark Streaming, Storm et al. in January.
- How you factor things is always important.
- data Artisans has a non-obvious URL.
Comments
Thank you, Curt, for the article!
One clarification regarding Flink production users: several companies have talked publicly about their use of Flink in production, including Alibaba, King, Zalando, Bouygues Telecom, ResearchGate, and Otto group.
The Flink community maintains a larger directory that collects such use cases here: http://flink.apache.org/poweredby.html
In my view, it would be very interesting to compare, as deeply as possible, the shuffle implementations in Spark and Flink.
I suggest focusing on this side of the comparison because once a job becomes less trivial and a shuffle is needed, that process dominates both performance and stability.
It should also be said that in the streaming case the shuffle acquires one more requirement, low latency, and it is interesting to see how that was solved.