In this guide we will start from scratch and go from setting up a Flink project to running a streaming analysis program on a Flink cluster.
Wikipedia provides an IRC channel where all edits to the wiki are logged. We are going to read this channel in Flink and count the number of bytes that each user edits within a given window of time. This is easy enough to implement in a few minutes using Flink, but it will give you a good foundation from which to start building more complex analysis programs on your own.
We are going to use a Flink Maven Archetype for creating our project structure. Please see the Java API Quickstart for more details about this. For our purposes, the command to run is this:
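A sketch of that command, assuming the flink-quickstart-java archetype and Flink version 1.0.0 (adjust the archetype version to the Flink release you are using):

```bash
$ mvn archetype:generate \
    -DarchetypeGroupId=org.apache.flink \
    -DarchetypeArtifactId=flink-quickstart-java \
    -DarchetypeVersion=1.0.0 \
    -DgroupId=wiki-edits \
    -DartifactId=wiki-edits \
    -Dversion=0.1 \
    -Dpackage=wikiedits \
    -DinteractiveMode=false
```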
You can edit the groupId, artifactId and package if you like. With the above parameters, Maven will create a project structure that looks like this:
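Roughly the following layout should appear; the exact example classes depend on the archetype version, so the file names below are only illustrative:

```
wiki-edits/
├── pom.xml
└── src
    └── main
        ├── java
        │   └── wikiedits
        │       ├── Job.java
        │       ├── SocketTextStreamWordCount.java
        │       └── WordCount.java
        └── resources
            └── log4j.properties
```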
In the root directory there is our pom.xml file, which already has the Flink dependencies added, and there are several example Flink programs in src/main/java. We can delete the example programs, since we are going to start from scratch:
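For example, from the directory that contains the generated wiki-edits folder:

```bash
$ rm wiki-edits/src/main/java/wikiedits/*.java
```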
As a last step we need to add the Flink Wikipedia connector as a dependency so that we can use it in our program. Edit the dependencies section so that it looks like this:
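A sketch of the dependencies section, assuming Scala-2.10 artifacts and a flink.version property defined by the quickstart pom; artifact names and versions may differ for your Flink release:

```xml
<dependencies>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-java</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-streaming-java_2.10</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-clients_2.10</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-connector-wikiedits_2.10</artifactId>
        <version>${flink.version}</version>
    </dependency>
</dependencies>
```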
Notice the flink-connector-wikiedits_2.10 dependency that was added. (This example and the Wikipedia connector were inspired by the Hello Samza example of Apache Samza.)
It’s coding time. Fire up your favorite IDE and import the Maven project, or open a text editor and create the file src/main/java/wikiedits/WikipediaAnalysis.java:
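A minimal skeleton to start from:

```java
package wikiedits;

public class WikipediaAnalysis {

    public static void main(String[] args) throws Exception {

    }
}
```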
The program is very basic now, but we will fill it in as we go. Note that I won't give import statements here, since IDEs can add them automatically. At the end of this section I'll show the complete code with import statements if you want to skip ahead and enter that in your editor.
The first step in a Flink program is to create a StreamExecutionEnvironment (or ExecutionEnvironment if you are writing a batch job). This can be used to set execution parameters and create sources for reading from external systems. So let's go ahead and add this to the main method:
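For example (the variable name see is just a choice):

```java
StreamExecutionEnvironment see = StreamExecutionEnvironment.getExecutionEnvironment();
```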
Next we will create a source that reads from the Wikipedia IRC log:
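A sketch of that step; the source class name WikipediaEditsSource is an assumption about the connector we added above:

```java
DataStream<WikipediaEditEvent> edits = see.addSource(new WikipediaEditsSource());
```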
This creates a DataStream of WikipediaEditEvent elements that we can further process. For the purposes of this example we are interested in determining the number of added or removed bytes that each user causes in a certain time window, let's say five seconds. For this we first have to specify that we want to key the stream on the user name, that is to say that operations on this stream should take the user name into account. In our case the summation of edited bytes in the windows should be per unique user. For keying a Stream we have to provide a KeySelector, like this:
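A sketch of the keying step; it assumes that WikipediaEditEvent exposes the editing user via a getUser() getter:

```java
KeyedStream<WikipediaEditEvent, String> keyedEdits = edits
    .keyBy(new KeySelector<WikipediaEditEvent, String>() {
        @Override
        public String getKey(WikipediaEditEvent event) {
            // use the user name as the key of the stream
            return event.getUser();
        }
    });
```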
This gives us a Stream of WikipediaEditEvent that has a String key, the user name.
We can now specify that we want to have windows imposed on this stream and compute a
result based on elements in these windows. A window specifies a slice of a Stream
on which to perform a computation. Windows are required when computing aggregations
on an infinite stream of elements. In our example we will say
that we want to aggregate the sum of edited bytes for every five seconds:
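A sketch of the windowed fold; the getByteDiff() accessor on WikipediaEditEvent is an assumption here:

```java
DataStream<Tuple2<String, Long>> result = keyedEdits
    .timeWindow(Time.seconds(5))
    .fold(new Tuple2<>("", 0L), new FoldFunction<WikipediaEditEvent, Tuple2<String, Long>>() {
        @Override
        public Tuple2<String, Long> fold(Tuple2<String, Long> acc, WikipediaEditEvent event) {
            acc.f0 = event.getUser();       // remember the user this window belongs to
            acc.f1 += event.getByteDiff();  // accumulate the byte difference of this edit
            return acc;
        }
    });
```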
The first call, .timeWindow(), specifies that we want to have tumbling (non-overlapping) windows of five seconds. The second call specifies a Fold transformation on each window slice for each unique key. In our case we start from an initial value of ("", 0L) and add to it the byte difference of every edit in that time window for a user. The resulting Stream now contains a Tuple2<String, Long> for every user, which gets emitted every five seconds.
The only thing left to do is print the stream to the console and start execution:
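Using the environment and result stream from the snippets above:

```java
result.print();

see.execute();
```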
That last call is necessary to start the actual Flink job. All operations, such as creating sources, transformations and sinks, only build up a graph of internal operations. Only when execute() is called is this graph of operations submitted to a cluster or executed on your local machine.
The complete code so far is this:
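Putting the pieces together, including the imports, the program might look like this; the connector package and the WikipediaEditsSource, getUser() and getByteDiff() names are assumptions, as noted above:

```java
package wikiedits;

import org.apache.flink.api.common.functions.FoldFunction;
import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.KeyedStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.connectors.wikiedits.WikipediaEditEvent;
import org.apache.flink.streaming.connectors.wikiedits.WikipediaEditsSource;

public class WikipediaAnalysis {

  public static void main(String[] args) throws Exception {

    // set up the streaming execution environment
    StreamExecutionEnvironment see = StreamExecutionEnvironment.getExecutionEnvironment();

    // read the Wikipedia edit events from the IRC log
    DataStream<WikipediaEditEvent> edits = see.addSource(new WikipediaEditsSource());

    // key the stream by user name
    KeyedStream<WikipediaEditEvent, String> keyedEdits = edits
        .keyBy(new KeySelector<WikipediaEditEvent, String>() {
          @Override
          public String getKey(WikipediaEditEvent event) {
            return event.getUser();
          }
        });

    // sum the byte differences per user in tumbling five-second windows
    DataStream<Tuple2<String, Long>> result = keyedEdits
        .timeWindow(Time.seconds(5))
        .fold(new Tuple2<>("", 0L), new FoldFunction<WikipediaEditEvent, Tuple2<String, Long>>() {
          @Override
          public Tuple2<String, Long> fold(Tuple2<String, Long> acc, WikipediaEditEvent event) {
            acc.f0 = event.getUser();
            acc.f1 += event.getByteDiff();
            return acc;
          }
        });

    result.print();

    see.execute();
  }
}
```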
You can run this example in your IDE or on the command line, using Maven:
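For example (the exec:java invocation assumes the main class we just wrote):

```bash
$ mvn clean package
$ mvn exec:java -Dexec.mainClass=wikiedits.WikipediaAnalysis
```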
The first command builds our project and the second executes our main class. The output should be similar to this:
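The exact users and byte counts will of course differ; the lines below are purely illustrative of the format, which is parallel-instance> (user, byte difference):

```
1> (SomeUser,114)
2> (SomeBot,155)
2> (AnotherUser,-3690)
...
```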
The number in front of each line tells you on which parallel instance of the print sink the output was produced.
This should get you started with writing your own Flink programs. To learn more you can check out our guides about basic concepts and the DataStream API. Stick around for the bonus exercise if you want to learn about setting up a Flink cluster on your own machine and writing results to Kafka.
Please follow our setup quickstart for setting up a Flink distribution on your machine and refer to the Kafka quickstart for setting up a Kafka installation before we proceed.
As a first step, we have to add the Flink Kafka connector as a dependency so that we can use the Kafka sink. Add this to the pom.xml file in the dependencies section:
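A sketch, assuming Kafka 0.8 and Scala-2.10 artifacts; pick the connector artifact that matches your Kafka and Flink versions:

```xml
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka-0.8_2.10</artifactId>
    <version>${flink.version}</version>
</dependency>
```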
Next, we need to modify our program. We'll remove the print() sink and instead use a Kafka sink. The new code looks like this:
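A sketch of the new sink, assuming the Kafka-0.8 producer class FlinkKafkaProducer08 and a Kafka broker listening on localhost:9092:

```java
result
    // turn each (user, byteDiff) tuple into a plain string
    .map(new MapFunction<Tuple2<String, Long>, String>() {
        @Override
        public String map(Tuple2<String, Long> tuple) {
            return tuple.toString();
        }
    })
    // write the strings to the "wiki-result" Kafka topic
    .addSink(new FlinkKafkaProducer08<>("localhost:9092", "wiki-result", new SimpleStringSchema()));
```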
The related classes also need to be imported:
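Assuming the classes used in the sketch above, the additional imports would be:

```java
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer08;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;
```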
Note how we first transform the Stream of Tuple2<String, Long> to a Stream of String using a MapFunction. We are doing this because it is easier to write plain strings to Kafka. Then we create a Kafka sink. You might have to adapt the hostname and port to your setup. "wiki-result" is the name of the Kafka topic that we are going to create next, before running our program.
Build the project using Maven because we need the jar file for running on the cluster:
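```bash
$ mvn clean package
```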
The resulting jar file will be in the target subfolder: target/wiki-edits-0.1.jar. We'll use this later.
Now we are ready to launch a Flink cluster and run the program that writes to Kafka on it. Go to the location where you installed Flink and start a local cluster:
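For example (the path is a placeholder for wherever you unpacked Flink; the start-local.sh script is what distributions of this era ship, newer releases use a different startup script):

```bash
$ cd path/to/flink
$ bin/start-local.sh
```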
We also have to create the Kafka Topic, so that our program can write to it:
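With a Kafka installation of this era this could look like the following; the flags and the ZooKeeper address depend on your Kafka version and setup:

```bash
$ cd path/to/kafka
$ bin/kafka-topics.sh --create --zookeeper localhost:2181 \
    --replication-factor 1 --partitions 1 --topic wiki-result
```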
Now we are ready to run our jar file on the local Flink cluster:
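For example, pointing the -c flag at our main class and using the jar built earlier:

```bash
$ cd path/to/flink
$ bin/flink run -c wikiedits.WikipediaAnalysis path/to/wiki-edits/target/wiki-edits-0.1.jar
```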
The output of that command should look similar to this, if everything went according to plan:
03/08/2016 15:09:27 Job execution switched to status RUNNING.
03/08/2016 15:09:27 Source: Custom Source(1/1) switched to SCHEDULED
03/08/2016 15:09:27 Source: Custom Source(1/1) switched to DEPLOYING
03/08/2016 15:09:27 TriggerWindow(TumblingProcessingTimeWindows(5000), FoldingStateDescriptor{name=window-contents, defaultValue=(,0), serializer=null}, ProcessingTimeTrigger(), WindowedStream.fold(WindowedStream.java:207)) -> Map -> Sink: Unnamed(1/1) switched to SCHEDULED
03/08/2016 15:09:27 TriggerWindow(TumblingProcessingTimeWindows(5000), FoldingStateDescriptor{name=window-contents, defaultValue=(,0), serializer=null}, ProcessingTimeTrigger(), WindowedStream.fold(WindowedStream.java:207)) -> Map -> Sink: Unnamed(1/1) switched to DEPLOYING
03/08/2016 15:09:27 TriggerWindow(TumblingProcessingTimeWindows(5000), FoldingStateDescriptor{name=window-contents, defaultValue=(,0), serializer=null}, ProcessingTimeTrigger(), WindowedStream.fold(WindowedStream.java:207)) -> Map -> Sink: Unnamed(1/1) switched to RUNNING
03/08/2016 15:09:27 Source: Custom Source(1/1) switched to RUNNING
You can see how the individual operators start running. There are only two, because the operations after the window get folded into one operation for performance reasons. In Flink we call this chaining.
You can observe the output of the program by inspecting the Kafka topic using the Kafka console consumer:
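For example (older Kafka console consumers connect via ZooKeeper as shown here; newer ones take a --bootstrap-server flag instead):

```bash
$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic wiki-result --from-beginning
```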
You can also check out the Flink dashboard which should be running at http://localhost:8081. You get an overview of your cluster resources and running jobs:
If you click on your running job you will get a view where you can inspect individual operations and, for example, see the number of processed elements:
This concludes our little tour of Flink. If you have any questions, please don’t hesitate to ask on our Mailing Lists.