This connector provides a Sink that writes partitioned files to any filesystem supported by Hadoop FileSystem. To use this connector, add the following dependency to your project:
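The bucketing sink lives in the flink-connector-filesystem module; a typical Maven dependency looks roughly as follows (the Scala suffix and the version are placeholders to adjust to your Flink release):

```xml
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-connector-filesystem_2.11</artifactId>
  <!-- use the version matching your Flink distribution -->
  <version>1.x.y</version>
</dependency>
```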
Note that the streaming connectors are currently not part of the binary distribution. See here for information about how to package the program with the libraries for cluster execution.
The bucketing behaviour as well as the writing can be configured, but we will get to that later. This is how you can create a bucketing sink which, by default, sinks to rolling files that are split by time:
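A minimal sketch of such a sink, assuming the BucketingSink from org.apache.flink.streaming.connectors.fs.bucketing in the 1.x flink-connector-filesystem module (the base path and the input stream are purely illustrative):

```java
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

// Any DataStream will do; a small in-memory stream is used here for illustration.
DataStream<String> input = env.fromElements("a", "b", "c");

// The base path is the only required argument; by default the sink
// buckets elements by the current system time.
input.addSink(new BucketingSink<String>("/base/path"));
```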
The only required parameter is the base path where the buckets will be stored. The sink can be further configured by specifying a custom bucketer, writer and batch size.
By default the bucketing sink will split by the current system time when elements arrive and will use the datetime pattern "yyyy-MM-dd--HH" to name the buckets. This pattern is passed to SimpleDateFormat with the current system time to form a bucket path. A new bucket will be created whenever a new date is encountered. For example, if you have a pattern that contains minutes as the finest granularity you will get a new bucket every minute. Each bucket is itself a directory that contains several part files: each parallel instance of the sink will create its own part file, and when part files get too big the sink will also create a new part file next to the others. When a bucket becomes inactive, the open part file will be flushed and closed. A bucket is regarded as inactive when it hasn't been written to recently. By default, the sink checks for inactive buckets every minute and closes any buckets which haven't been written to for over a minute. This behaviour can be configured with setInactiveBucketCheckInterval() and setInactiveBucketThreshold() on a BucketingSink.
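For example, a sink could be made to check for inactive buckets more frequently and close them sooner. This is a sketch; both setters take milliseconds and are assumed from the 1.x BucketingSink API:

```java
BucketingSink<String> sink = new BucketingSink<String>("/base/path");
// Look for inactive buckets every 30 seconds ...
sink.setInactiveBucketCheckInterval(30 * 1000L);
// ... and close buckets that have not been written to for two minutes.
sink.setInactiveBucketThreshold(2 * 60 * 1000L);
```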
You can also specify a custom bucketer by using setBucketer() on a BucketingSink. If desired, the bucketer can use a property of the element or tuple to determine the bucket directory.
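For instance, a bucketer could derive the directory from a field of the element instead of the processing time. The sketch below assumes the 1.x Bucketer interface, whose single getBucketPath() method receives a clock, the base path and the element; the KeyBucketer class itself is hypothetical:

```java
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.connectors.fs.Clock;
import org.apache.flink.streaming.connectors.fs.bucketing.Bucketer;
import org.apache.hadoop.fs.Path;

// Hypothetical bucketer that groups elements by the first tuple field.
public class KeyBucketer implements Bucketer<Tuple2<String, Long>> {

    @Override
    public Path getBucketPath(Clock clock, Path basePath, Tuple2<String, Long> element) {
        // One sub-directory per distinct key instead of one per time interval.
        return new Path(basePath, element.f0);
    }
}
```

It would then be registered on the sink with sink.setBucketer(new KeyBucketer()).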
The default writer is StringWriter. This will call toString() on the incoming elements and write them to part files, separated by newline. To specify a custom writer use setWriter() on a BucketingSink. If you want to write Hadoop SequenceFiles you can use the provided SequenceFileWriter, which can also be configured to use compression.
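As a sketch, a sink for Hadoop (key, value) pairs could be given a compressed SequenceFileWriter like this; the two-argument constructor taking a codec name and a SequenceFile.CompressionType, as well as the "Default" codec name, are assumptions to verify against your connector version:

```java
BucketingSink<Tuple2<IntWritable, Text>> sink =
        new BucketingSink<Tuple2<IntWritable, Text>>("/base/path");
// Write block-compressed Hadoop SequenceFiles using the default codec.
sink.setWriter(new SequenceFileWriter<IntWritable, Text>("Default", SequenceFile.CompressionType.BLOCK));
```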
The last configuration option is the batch size. This specifies the size at which a part file is closed and a new one started next to it (the default batch size is 384 MB).
Example:
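The following is a sketch of a fully configured sink against the 1.x BucketingSink API; the bucketing pattern, writer types and batch size are illustrative:

```java
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

// A small illustrative stream of Hadoop (key, value) pairs.
DataStream<Tuple2<IntWritable, Text>> input = env.fromElements(
        Tuple2.of(new IntWritable(1), new Text("first")),
        Tuple2.of(new IntWritable(2), new Text("second")));

BucketingSink<Tuple2<IntWritable, Text>> sink =
        new BucketingSink<Tuple2<IntWritable, Text>>("/base/path");
// Bucket by minute rather than the default hourly pattern.
sink.setBucketer(new DateTimeBucketer<Tuple2<IntWritable, Text>>("yyyy-MM-dd--HHmm"));
// Write Hadoop SequenceFiles rather than newline-separated strings.
sink.setWriter(new SequenceFileWriter<IntWritable, Text>());
// Roll to a new part file after roughly 400 MB.
sink.setBatchSize(1024L * 1024 * 400);

input.addSink(sink);
```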
This will create a sink that writes to bucket files that follow this schema:
/base/path/{date-time}/part-{parallel-task}-{count}
Where date-time is the string that we get from the date/time format, parallel-task is the index of the parallel sink instance and count is the running number of part files that were created because of the batch size.
For in-depth information, please refer to the JavaDoc for BucketingSink.