Unit 5 (SP)
STREAMING
Syllabus: Structured Streaming, Basic Concepts, Handling Event-time and Late Data, Fault-
tolerant Semantics, Exactly-once Semantics, Creating Streaming Datasets, Schema Inference,
Partitioning of Streaming datasets, Operations on Streaming Data, Selection, Aggregation,
Projection, Watermarking, Window operations, Types of Time windows, Join Operations,
Deduplication
1.1.Structured Streaming-Overview
Structured Streaming is a scalable and fault-tolerant stream processing engine built on
the Spark SQL engine. You can express your streaming computation the same way you
would express a batch computation on static data. The Spark SQL engine will take care
of running it incrementally and continuously and updating the final result as streaming
data continues to arrive. You can use the Dataset/DataFrame API in Scala, Java, Python
or R to express streaming aggregations, event-time windows, stream-to-batch joins, etc.
The computation is executed on the same optimized Spark SQL engine. Finally, the
system ensures end-to-end exactly-once fault-tolerance guarantees through
checkpointing and Write-Ahead Logs. In short, Structured Streaming provides fast,
scalable, fault-tolerant, end-to-end exactly-once stream processing without the user
having to reason about streaming.
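As a rough sketch of how this checkpointing is enabled in practice (the DataFrame, sink, and path below are illustrative placeholders, not part of this overview):
streamingDF = ...  # any streaming DataFrame

# Sketch: the checkpoint location is where Spark persists offsets and state
# (via write-ahead logs) so a restarted query resumes with exactly-once guarantees.
query = streamingDF.writeStream \
    .format("console") \
    .option("checkpointLocation", "/tmp/checkpoints/my-query") \
    .start()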
In this guide, we are going to walk you through the programming model and the APIs. We
are going to explain the concepts mostly using the default micro-batch processing
model, and then later discuss Continuous Processing model. First, let’s start with a
simple example of a Structured Streaming query - a streaming word count.
Quick Example
Let’s say you want to maintain a running word count of text data received from a data
server listening on a TCP socket. Let’s see how you can express this using Structured
Streaming. You can see the full code in Scala/Java/Python/R. And if you download
Spark, you can directly run the example. In any case, let’s walk through the example
step-by-step and understand how it works. First, we have to import the necessary
classes and create a local SparkSession, the starting point of all functionalities related
to Spark.
from pyspark.sql import SparkSession
from pyspark.sql.functions import explode
from pyspark.sql.functions import split

spark = SparkSession \
    .builder \
    .appName("StructuredNetworkWordCount") \
    .getOrCreate()
Next, let’s create a streaming DataFrame that represents text data received from a
server listening on localhost:9999, and transform the DataFrame to calculate word
counts.
lines = spark \
.readStream \
.format("socket") \
.option("host", "localhost") \
.option("port", 9999) \
.load()
# Split the lines into words
words = lines.select(
    explode(
        split(lines.value, " ")
    ).alias("word")
)

# Generate running word count
wordCounts = words.groupBy("word").count()
This lines DataFrame represents an unbounded table containing the streaming text
data. This table contains one column of strings named “value”, and each line in the
streaming text data becomes a row in the table. Note that this is not currently receiving
any data as we are just setting up the transformation, and have not yet started it. Next,
we have used two built-in SQL functions - split and explode, to split each line into
multiple rows with a word each. In addition, we use the function alias to name the new
column as “word”. Finally, we have defined the wordCounts DataFrame by grouping by
the unique values in the Dataset and counting them. Note that this is a streaming
DataFrame which represents the running word counts of the stream.
We have now set up the query on the streaming data. All that is left is to actually start
receiving data and computing the counts. To do this, we set it up to print the complete
set of counts (specified by outputMode("complete")) to the console every time they are
updated. And then start the streaming computation using start().
# Start running the query that prints the running counts to the console
query = wordCounts \
.writeStream \
.outputMode("complete") \
.format("console") \
.start()
query.awaitTermination()
After this code is executed, the streaming computation will have started in the
background. The query object is a handle to that active streaming query, and we have
decided to wait for the termination of the query using awaitTermination() to prevent the
process from exiting while the query is active.
To actually execute this example code, you can either compile the code in your
own Spark application, or simply run the example once you have downloaded Spark.
We are showing the latter. You will first need to run Netcat (a small utility found in most
Unix-like systems) as a data server by using
$ nc -lk 9999
$ ./bin/spark-submit examples/src/main/python/sql/streaming/structured_network_wordcount.py localhost 9999
Then, any lines typed in the terminal running the netcat server will be counted and
printed on screen every second. It will look something like the following.
# TERMINAL 1: Running Netcat

$ nc -lk 9999
apache spark
apache hadoop

# TERMINAL 2: RUNNING structured_network_wordcount.py

$ ./bin/spark-submit examples/src/main/python/sql/streaming/structured_network_wordcount.py localhost 9999

-------------------------------------------
Batch: 0
-------------------------------------------
+------+-----+
| value|count|
+------+-----+
|apache|    1|
| spark|    1|
+------+-----+

-------------------------------------------
Batch: 1
-------------------------------------------
+------+-----+
| value|count|
+------+-----+
|apache|    2|
| spark|    1|
|hadoop|    1|
+------+-----+
...
Programming Model
The key idea in Structured Streaming is to treat a live data stream as a table that is being
continuously appended. This leads to a new stream processing model that is very
similar to a batch processing model. You will express your streaming computation as a
standard batch-like query, as if on a static table, and Spark runs it as an incremental query
on the unbounded input table. Let’s understand this model in more detail.
1.2.Basic Concepts
Consider the input data stream as the “Input Table”. Every data item that is arriving on
the stream is like a new row being appended to the Input Table.
A query on the input will generate the “Result Table”. Every trigger interval (say, every 1
second), new rows get appended to the Input Table, which eventually updates the
Result Table. Whenever the result table gets updated, we would want to write the
changed result rows to an external sink.
The “Output” is defined as what gets written out to the external storage. The output can
be defined in one of the following modes:
• Complete Mode - The entire updated Result Table will be written to the external
storage. It is up to the storage connector to decide how to handle writing of the
entire table.
• Append Mode - Only the new rows appended in the Result Table since the last
trigger will be written to the external storage. This is applicable only on the
queries where existing rows in the Result Table are not expected to change.
• Update Mode - Only the rows that were updated in the Result Table since the last
trigger will be written to the external storage (available since Spark 2.1.1). Note
that this is different from the Complete Mode in that this mode only outputs the
rows that have changed since the last trigger. If the query doesn’t contain
aggregations, it will be equivalent to Append mode.
Note that each mode is applicable on certain types of queries. This is discussed in
detail later.
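As an illustrative sketch (not from the original example), the output mode is chosen on the writeStream builder, and which modes are valid depends on the query; the snippets below assume the wordCounts and words DataFrames from the Quick Example:
# Each of the following is an alternative way to start the quick example's query.
# Complete mode: rewrite the entire result table on every trigger.
wordCounts.writeStream.outputMode("complete").format("console").start()

# Update mode: write only the rows whose counts changed since the last trigger.
wordCounts.writeStream.outputMode("update").format("console").start()

# Append mode suits only queries whose result rows never change once written,
# e.g. the plain projection `words`; it would be rejected for this
# unwatermarked aggregation.
words.writeStream.outputMode("append").format("console").start()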
To illustrate the use of this model, let’s understand the model in context of the Quick
Example above. The first lines DataFrame is the input table, and the
final wordCounts DataFrame is the result table. Note that the query on
streaming lines DataFrame to generate wordCounts is exactly the same as it would be with a
static DataFrame. However, when this query is started, Spark will continuously check
for new data from the socket connection. If there is new data, Spark will run an
“incremental” query that combines the previous running counts with the new data to
compute updated counts, as shown below.
Note that Structured Streaming does not materialize the entire table. It reads the
latest available data from the streaming data source, processes it incrementally to
update the result, and then discards the source data. It only keeps around the minimal
intermediate state data as required to update the result (e.g. intermediate counts in the
earlier example).
This model is significantly different from many other stream processing engines. Many
streaming systems require the user to maintain running aggregations themselves, thus
having to reason about fault-tolerance, and data consistency (at-least-once, or at-most-
once, or exactly-once). In this model, Spark is responsible for updating the Result Table
when there is new data, thus relieving the users from reasoning about it. As an example,
let’s see how this model handles event-time based processing and late arriving data.
Event-time is the time embedded in the data itself. For many applications, you may
want to operate on this event-time. For example, if you want to get the number of events
generated by IoT devices every minute, then you probably want to use the time when the
data was generated (that is, event-time in the data), rather than the time Spark receives
them. This event-time is very naturally expressed in this model – each event from the
devices is a row in the table, and event-time is a column value in the row. This allows
window-based aggregations (e.g. number of events every minute) to be just a special
type of grouping and aggregation on the event-time column – each time window is a
group and each row can belong to multiple windows/groups. Therefore, such event-
time-window-based aggregation queries can be defined consistently on both a static
dataset (e.g. from collected device events logs) as well as on a data stream, making the
life of the user much easier.
Furthermore, this model naturally handles data that has arrived later than expected
based on its event-time. Since Spark is updating the Result Table, it has full control over
updating old aggregates when there is late data, as well as cleaning up old aggregates to
limit the size of intermediate state data. Since Spark 2.1, we have support for
watermarking which allows the user to specify the threshold of late data, and allows the
engine to accordingly clean up old state. These are explained later in more detail in
the Window Operations section.
Since Spark 2.0, DataFrames and Datasets can represent static, bounded data, as well
as streaming, unbounded data. Similar to static Datasets/DataFrames, you can use the
common entry point SparkSession (Scala/Java/Python/R docs) to create streaming
DataFrames/Datasets from streaming sources, and apply the same operations on them
as static DataFrames/Datasets. If you are not familiar with Datasets/DataFrames, you
are strongly advised to familiarize yourself with them using the DataFrame/Dataset
Programming Guide.
Input Sources
There are a few built-in sources.
• File source - Reads files written in a directory as a stream of data. Files will be
processed in the order of file modification time. If latestFirst is set, order will be
reversed. Supported file formats are text, CSV, JSON, ORC, Parquet. See the
docs of the DataStreamReader interface for a more up-to-date list, and
supported options for each file format. Note that the files must be atomically
placed in the given directory, which in most file systems, can be achieved by file
move operations.
• Kafka source - Reads data from Kafka. It’s compatible with Kafka broker
versions 0.10.0 or higher. See the Kafka Integration Guide for more details.
• Socket source (for testing) - Reads UTF8 text data from a socket connection.
The listening server socket is at the driver. Note that this should be used only for
testing as this does not provide end-to-end fault-tolerance guarantees.
• Rate source (for testing) - Generates data at the specified number of rows per
second, each output row contains a timestamp and value. Where timestamp is
a Timestamp type containing the time of message dispatch, and value is
of Long type containing the message count, starting from 0 as the first row. This
source is intended for testing and benchmarking.
• Rate Per Micro-Batch source (for testing) - Generates data at the specified
number of rows per micro-batch, each output row contains
a timestamp and value. Where timestamp is a Timestamp type containing the
time of message dispatch, and value is of Long type containing the message
count, starting from 0 as the first row. Unlike rate data source, this data source
provides a consistent set of input rows per micro-batch regardless of query
execution (configuration of trigger, query being lagging, etc.), say, batch 0 will
produce 0~999 and batch 1 will produce 1000~1999, and so on. Same applies to
the generated time. This source is intended for testing and benchmarking.
Some sources are not fault-tolerant because they do not guarantee that data can be
replayed using checkpointed offsets after a failure. See the earlier section on fault-
tolerance semantics. Here are the details of all the sources in Spark.
File source options:
• path: path to the input directory, common to all file formats.
• maxFilesPerTrigger: maximum number of new files to be considered in every trigger (default: no max).
• latestFirst: whether to process the latest new files first, useful when there is a large backlog of files (default: false).
• cleanSource: option to clean up completed files after processing. Archiving or deleting completed files introduces overhead in each micro-batch, so understand the cost of each operation in your file system before enabling this option. On the other hand, enabling this option will reduce the cost to list source files, which can be an expensive operation. The number of threads used in the completed-file cleaner can be configured with spark.sql.streaming.fileSource.cleaner.numThreads (default: 1).
NOTE 2: The source path should not be used from multiple sources or queries when enabling this option. Similarly, you must ensure the source path doesn't match any files in the output directory of a file stream sink.
NOTE 3: Both delete and move actions are best effort. Failing to delete or move files will not fail the streaming query. Spark may not clean up some source files in some circumstances, e.g. the application doesn't shut down gracefully, or too many files are queued to clean up.

Rate source options:
• rampUpTime (e.g. 5s, default: 0s): how long to ramp up before the generating speed becomes rowsPerSecond. Using finer granularities than seconds will be truncated to integer seconds. The source will try its best to reach rowsPerSecond, but the query may be resource constrained, and numPartitions can be tweaked to help reach the desired speed.

Rate Per Micro-Batch source (format: rate-micro-batch) options:
• numPartitions (e.g. 10, default: Spark's default parallelism): the partition number for the generated rows.
• startTimestamp (e.g. 1000, default: 0): starting value of generated time.
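For example, a minimal sketch of reading from the rate source with some of the options above (the values are arbitrary):
# Sketch: a testing stream that emits 10 rows per second (after a 5 second
# ramp-up) across 2 partitions, with columns (timestamp, value).
rateDF = spark.readStream \
    .format("rate") \
    .option("rowsPerSecond", 10) \
    .option("rampUpTime", "5s") \
    .option("numPartitions", 2) \
    .load()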
socketDF = spark \
.readStream \
.format("socket") \
.option("host", "localhost") \
.option("port", 9999) \
.load()
socketDF.isStreaming  # Returns True for DataFrames that have streaming sources

socketDF.printSchema()

# Read all the csv files written atomically in a directory
from pyspark.sql.types import StructType
userSchema = StructType().add("name", "string").add("age", "integer")
csvDF = spark \
    .readStream \
    .option("sep", ";") \
    .schema(userSchema) \
    .csv("/path/to/directory")  # Equivalent to format("csv").schema(userSchema).load()
These examples generate streaming DataFrames that are untyped, meaning that the
schema of the DataFrame is not checked at compile time, only checked at runtime
when the query is submitted. Some operations like map, flatMap, etc. need the type to
be known at compile time. To do those, you can convert these untyped streaming
DataFrames to typed streaming Datasets using the same methods as static DataFrame.
See the SQL Programming Guide for more details. Additionally, more details on the
supported streaming sources are discussed later in the document.
Since Spark 3.1, you can also create streaming DataFrames from tables
with DataStreamReader.table(). See Streaming Table APIs for more details.
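For instance, a minimal sketch (the table name is a placeholder):
# Sketch: read a table as a streaming DataFrame ("events" is a hypothetical table name).
tableDF = spark.readStream.table("events")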
Partition discovery does occur when subdirectories that are named /key=value/ are
present and listing will automatically recurse into these directories. If these columns
appear in the user-provided schema, they will be filled in by Spark based on the path of
the file being read. The directories that make up the partitioning scheme must be
present when the query starts and must remain static. For example, it is okay to
add /data/year=2016/ when /data/year=2015/ was present, but it is invalid to change the
partitioning column (i.e. by creating the directory /data/date=2016-04-17/).
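As a sketch of what this looks like for a file source (the paths and schema are illustrative assumptions):
from pyspark.sql.types import StructType

# Sketch: /data/year=2015/ and /data/year=2016/ are partition directories.
# Because "year" appears in the user-provided schema, Spark fills it in from
# the directory path of each file it reads.
userSchema = StructType() \
    .add("name", "string") \
    .add("age", "integer") \
    .add("year", "integer")

partitionedDF = spark.readStream \
    .schema(userSchema) \
    .json("/data")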
1.6.Operations on streaming DataFrames/Datasets
You can apply all kinds of operations on streaming DataFrames/Datasets – ranging from
untyped, SQL-like operations (e.g. select, where, groupBy), to typed RDD-like operations
(e.g. map, filter, flatMap). See the SQL programming guide for more details. Let’s take a
look at a few example operations that you can use.
df = ...  # streaming DataFrame with IOT device data with schema { device: string, deviceType: string, signal: double, time: DateType }

# Select the devices which have signal more than 10
df.select("device").where("signal > 10")

# Running count of the number of updates for each device type
df.groupBy("deviceType").count()
You can also register a streaming DataFrame/Dataset as a temporary view and then
apply SQL commands on it.
df.createOrReplaceTempView("updates")
spark.sql("select count(*) from updates")  # returns another streaming DataFrame
Note, you can identify whether a DataFrame/Dataset has streaming data or not by
using df.isStreaming.
df.isStreaming
You may want to check the query plan of the query, as Spark could inject stateful operations while interpreting a SQL statement against a streaming dataset. Once stateful operations are injected into the query plan, you may need to review your query with stateful operations in mind (e.g. output mode, watermark, state store size maintenance, etc.).
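A quick way to inspect the plan is explain() on the streaming DataFrame (a sketch; the exact output depends on your query):
# Sketch: print the physical plan of the streaming query defined on df;
# stateful operators (e.g. state store saves for aggregations) show up here.
df.groupBy("deviceType").count().explain()

# extended=True also prints the parsed, analyzed and optimized logical plans.
df.groupBy("deviceType").count().explain(extended=True)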
Imagine our quick example is modified and the stream now contains lines along with
the time when the line was generated. Instead of running word counts, we want to count
words within 10 minute windows, updating every 5 minutes. That is, word counts in
words received between 10 minute windows 12:00 - 12:10, 12:05 - 12:15, 12:10 - 12:20,
etc. Note that 12:00 - 12:10 means data that arrived after 12:00 but before 12:10. Now,
consider a word that was received at 12:07. This word should increment the counts
corresponding to two windows 12:00 - 12:10 and 12:05 - 12:15. So the counts will be
indexed by both, the grouping key (i.e. the word) and the window (can be calculated
from the event-time).
from pyspark.sql.functions import window

# Group the data by window and word and compute the count of each group
windowedCounts = words.groupBy(
    window(words.timestamp, "10 minutes", "5 minutes"),
    words.word
).count()
However, to run this query for days, it’s necessary for the system to bound the amount
of intermediate in-memory state it accumulates. This means the system needs to know
when an old aggregate can be dropped from the in-memory state because the
application is not going to receive late data for that aggregate any more. To enable this,
in Spark 2.1, we have introduced watermarking, which lets the engine automatically
track the current event time in the data and attempt to clean up old state accordingly.
You can define the watermark of a query by specifying the event time column and the
threshold on how late the data is expected to be in terms of event time. For a specific
window ending at time T, the engine will maintain state and allow late data to update the
state until (max event time seen by the engine - late threshold > T). In other words, late
data within the threshold will be aggregated, but data later than the threshold will start
getting dropped (see later in the section for the exact guarantees). Let’s understand this
with an example. We can easily define watermarking on the previous example
using withWatermark() as shown below.
windowedCounts = words \
    .withWatermark("timestamp", "10 minutes") \
    .groupBy(
        window(words.timestamp, "10 minutes", "5 minutes"),
        words.word) \
    .count()
In this example, we are defining the watermark of the query on the value of the column
“timestamp”, and also defining “10 minutes” as the threshold of how late is the data
allowed to be. If this query is run in Update output mode (discussed later in Output
Modes section), the engine will keep updating counts of a window in the Result Table
until the window is older than the watermark, which lags behind the current event time
in column “timestamp” by 10 minutes. Here is an illustration.
As shown in the illustration, the maximum event time tracked by the engine is the blue
dashed line, and the watermark set as (max event time - '10 mins') at the beginning of
every trigger is the red line. For example, when the engine observes the data (12:14,
dog), it sets the watermark for the next trigger as 12:04. This watermark lets the engine
maintain intermediate state for additional 10 minutes to allow late data to be counted.
For example, the data (12:09, cat) is out of order and late, and it falls in windows 12:00 -
12:10 and 12:05 - 12:15. Since it is still ahead of the watermark 12:04 in the trigger, the
engine still maintains the intermediate counts as state and correctly updates the
counts of the related windows. However, when the watermark is updated to 12:11, the
intermediate state for window (12:00 - 12:10) is cleared, and all subsequent data
(e.g. (12:04, donkey)) is considered “too late” and therefore ignored. Note that after
every trigger, the updated counts (i.e. purple rows) are written to sink as the trigger
output, as dictated by the Update mode.
Some sinks (e.g. files) may not support the fine-grained updates that Update Mode requires. To work with them, we also support Append Mode, where only the final counts are written to the sink. This is illustrated below.
Similar to the Update Mode earlier, the engine maintains intermediate counts for each window. However, the partial counts are not updated to the Result Table and not written to the sink. The engine waits for "10 mins" for late data to be counted, then drops the intermediate state of any window older than the watermark, and appends the final counts to the Result Table/sink. For example, the final counts of window 12:00 - 12:10 are appended to the Result Table only after the watermark is updated to 12:11.
Sliding windows are similar to tumbling windows in that they are "fixed-sized", but windows can overlap if the duration of the slide is smaller than the duration of the window, and in this case an input can be bound to multiple windows.
Tumbling and sliding windows use the window function, which was shown in the examples above.
Session windows have a different characteristic compared to the previous two types. A session window has a dynamic window length, depending on the inputs. A session window starts with an input and expands if a following input is received within the gap duration. For a static gap duration, a session window closes when no input is received within the gap duration after the latest input.
Session windows use the session_window function. Its usage is similar to the window function.
from pyspark.sql.functions import session_window

events = ...  # streaming DataFrame of schema { timestamp: Timestamp, userId: String }

# Group the data by session window and userId, and compute the count of each group
sessionizedCounts = events \
    .withWatermark("timestamp", "10 minutes") \
    .groupBy(
        session_window(events.timestamp, "5 minutes"),
        events.userId) \
    .count()
Instead of static value, we can also provide an expression to specify gap duration
dynamically based on the input row. Note that the rows with negative or zero gap
duration will be filtered out from the aggregation.
With dynamic gap duration, the closing of a session window does not depend on the
latest input anymore. A session window’s range is the union of all events’ ranges which
are determined by event start time and evaluated gap duration during the query
execution.
from pyspark.sql import functions as sf

# Gap duration computed dynamically from the input row
session_window = session_window(events.timestamp, \
    sf.when(events.userId == "user1", "5 seconds") \
    .when(events.userId == "user2", "20 seconds").otherwise("5 minutes"))

# Group the data by session window and userId, and compute the count of each group
sessionizedCounts = events \
    .withWatermark("timestamp", "10 minutes") \
    .groupBy(
        session_window,
        events.userId) \
    .count()
Note that there are some restrictions when you use session windows in a streaming query. For a batch query, the global window (having only session_window in the grouping key) is supported.
By default, Spark does not perform partial aggregation for session window aggregation, since it requires an additional sort in local partitions before grouping. This works better when there are only a few input rows per group key in each local partition; but when there are numerous input rows with the same group key in a local partition, doing partial aggregation can still increase performance significantly despite the additional sort.
You can enable spark.sql.streaming.sessionWindow.merge.sessions.in.local.partition to instruct Spark to perform partial aggregation, as shown below.
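A sketch of setting this configuration on the active SparkSession, before starting the query:
# Sketch: opt in to partial aggregation for session window aggregation.
spark.conf.set(
    "spark.sql.streaming.sessionWindow.merge.sessions.in.local.partition",
    "true")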
In some use cases, it is necessary to extract a timestamp representation of a time window, in order to apply operations that require a timestamp to the time-windowed data. One example is chained time window aggregations, where users want to define another time window on top of a time window. Say, someone wants to aggregate 5-minute time windows into a 1-hour tumbling window.
The window_time function produces a timestamp which represents the time of a time window. Users can pass the result to the window function (or anywhere a timestamp is required) to perform operations on the time-windowed data that need a timestamp.
from pyspark.sql.functions import window, window_time

# Group the data by window and word and compute the count of each group
windowedCounts = words.groupBy(
    window(words.timestamp, "10 minutes", "5 minutes"),
    words.word
).count()

# Group the windowed data by another window and word and compute the count of each group
anotherWindowedCounts = windowedCounts.groupBy(
    window(window_time(windowedCounts.window), "1 hour"),
    windowedCounts.word
).count()
The window function takes not only a timestamp column but also a time window column. This is specifically useful for cases where users want to apply chained time window aggregations.
# Group the data by window and word and compute the count of each group
windowedCounts = words.groupBy(
    window(words.timestamp, "10 minutes", "5 minutes"),
    words.word
).count()

# Group the windowed data by another window and word and compute the count of each group
anotherWindowedCounts = windowedCounts.groupBy(
    window(windowedCounts.window, "1 hour"),
    windowedCounts.word
).count()
It is important to note that the following conditions must be satisfied for the
watermarking to clean the state in aggregation queries (as of Spark 2.1.1, subject to
change in the future).
• The aggregation must have either the event-time column, or a window on the
event-time column.
• withWatermark must be called before the aggregation for the watermark details to be used. For example, df.groupBy("time").count().withWatermark("time", "1 min") is invalid in Append output mode (a valid ordering is sketched below).
• However, the guarantee is strict only in one direction. Data delayed by more than the watermark threshold is not guaranteed to be dropped; it may or may not get aggregated. The more delayed the data is, the less likely the engine is to process it.
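For contrast with the invalid ordering above, here is a minimal sketch of the valid ordering (watermark declared before the aggregation), assuming a DataFrame df with an event-time column "time":
from pyspark.sql.functions import window

# Sketch: watermark declared before the aggregation, so state for windows older
# than the watermark can be dropped.
windowedCounts = df \
    .withWatermark("time", "1 min") \
    .groupBy(window(df.time, "10 minutes")) \
    .count()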
1.11.Join Operations
Structured Streaming supports joining a streaming Dataset/DataFrame with a static
Dataset/DataFrame as well as another streaming Dataset/DataFrame. The result of the
streaming join is generated incrementally, similar to the results of streaming
aggregations in the previous section. In this section we will explore what type of joins
(i.e. inner, outer, semi, etc.) are supported in the above cases. Note that in all the supported join types, the result of the join with a streaming Dataset/DataFrame will be exactly the same as if it were with a static Dataset/DataFrame containing the same data in the stream.
Stream-static Joins
Since the introduction in Spark 2.0, Structured Streaming has supported joins (inner join
and some type of outer joins) between a streaming and a static DataFrame/Dataset.
Here is a simple example.
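A minimal sketch of such a join, assuming a static DataFrame and a streaming DataFrame that share a column (the column name "type" and the source path are illustrative placeholders):
staticDf = spark.read.parquet("/data/device-types")   # static DataFrame (placeholder path)
streamingDf = ...  # streaming DataFrame that shares a "type" column with staticDf

streamingDf.join(staticDf, "type")                 # inner equi-join with a static DF
streamingDf.join(staticDf, "type", "left_outer")   # left outer join with a static DF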
Note that stream-static joins are not stateful, so no state management is necessary.
However, a few types of stream-static outer joins are not yet supported. These are listed
at the end of this Join section.
Stream-stream Joins
In Spark 2.3, we have added support for stream-stream joins, that is, you can join two
streaming Datasets/DataFrames. The challenge of generating join results between two
data streams is that, at any point of time, the view of the dataset is incomplete for both
sides of the join making it much harder to find matches between inputs. Any row
received from one input stream can match with any future, yet-to-be-received row from
the other input stream. Hence, for both the input streams, we buffer past input as
streaming state, so that we can match every future input with past input and
accordingly generate joined results. Furthermore, similar to streaming aggregations, we
automatically handle late, out-of-order data and can limit the state using watermarks.
Let’s discuss the different types of supported stream-stream joins and how to use them.
Inner joins on any kind of columns along with any kind of join conditions are supported.
However, as the stream runs, the size of streaming state will keep growing indefinitely
as all past input must be saved as any new input can match with any input from the
past. To avoid unbounded state, you have to define additional join conditions such that
indefinitely old inputs cannot match with future inputs and therefore can be cleared
from the state. In other words, you will have to do the following additional steps in the
join.
1. Define watermark delays on both inputs such that the engine knows how
delayed the input can be (similar to streaming aggregations)
2. Define a constraint on event-time across the two inputs such that the engine can figure out when old rows of one input are not going to be required (i.e. will not satisfy the time constraint) for matches with the other input. This constraint can be defined in one of two ways: as a time range join condition, or as a join on event-time windows.
As a running example, suppose we want to join a stream of ad impressions with a stream of user clicks on those ads. Then:
1. Watermark delays: say the impressions and the corresponding clicks can be late/out-of-order in event-time by at most 2 and 3 hours, respectively.
2. Event-time range condition: say a click can occur within a time range of 0 seconds to 1 hour after the corresponding impression.
from pyspark.sql.functions import expr

# impressionsWithWatermark and clicksWithWatermark are the two streams with
# watermarks applied on their event-time columns
impressionsWithWatermark.join(
    clicksWithWatermark,
    expr("""
        clickAdId = impressionAdId AND
        clickTime >= impressionTime AND
        clickTime <= impressionTime + interval 1 hour
        """)
)
While the watermark + event-time constraints are optional for inner joins, for outer joins
they must be specified. This is because for generating the NULL results in outer join, the
engine must know when an input row is not going to match with anything in future.
Hence, the watermark + event-time constraints must be specified for generating correct
results. Therefore, a query with outer-join will look quite like the ad-monetization
example earlier, except that there will be an additional parameter specifying it to be an
outer-join.
impressionsWithWatermark.join(
    clicksWithWatermark,
    expr("""
        clickAdId = impressionAdId AND
        clickTime >= impressionTime AND
        clickTime <= impressionTime + interval 1 hour
        """),
    "leftOuter"  # can be "inner", "leftOuter", "rightOuter", "fullOuter", "leftSemi"
)
Outer joins have the same guarantees as inner joins regarding watermark delays and
whether data will be dropped or not.
Caveats
There are a few important characteristics to note regarding how the outer results are
generated.
• The outer NULL results will be generated with a delay that depends on the
specified watermark delay and the time range condition. This is because the
engine has to wait for that long to ensure there were no matches and there will
be no more matches in future.
A semi join returns values from the left side of the relation that have a match with the right; it is also referred to as a left semi join. Similar to outer joins, watermark + event-time constraints must be specified for a semi join: to evict unmatched input rows on the left side, the engine must know when an input row on the left side is not going to match with anything on the right side in the future.
Semi joins have the same guarantees as inner joins regarding watermark delays and
whether data will be dropped or not.
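A sketch, reusing the watermarked impressions and clicks streams from the earlier example (the only change from the outer join is the join type):
# Sketch: keep only impressions that are matched by at least one click within
# the event-time constraint.
impressionsWithWatermark.join(
    clicksWithWatermark,
    expr("""
        clickAdId = impressionAdId AND
        clickTime >= impressionTime AND
        clickTime <= impressionTime + interval 1 hour
        """),
    "leftSemi")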
Support for stream-static and static-stream joins:
Left Input | Right Input | Join Type | Support
Stream | Static | Left Outer | Supported, not stateful
Stream | Static | Right Outer | Not supported
Stream | Static | Full Outer | Not supported
Stream | Static | Left Semi | Supported, not stateful
Static | Stream | Left Outer | Not supported
Static | Stream | Right Outer | Supported, not stateful
Static | Stream | Full Outer | Not supported
Static | Stream | Left Semi | Not supported
• Joins can be cascaded, that is, you can do df1.join(df2, ...).join(df3, ...).join(df4,
....).
• As of Spark 2.4, you can use joins only when the query is in Append output mode.
Other output modes are not yet supported.
In Append output mode, you can construct a query having non-map-like operations (e.g. aggregation, deduplication, another stream-stream join) before/after a join.
For example, here is a time window aggregation in both streams followed by a stream-stream join on the event time window:
clicksWindow = clicksWithWatermark.groupBy(
    clicksWithWatermark.clickAdId,
    window(clicksWithWatermark.clickTime, "1 hour")
).count()

impressionsWindow = impressionsWithWatermark.groupBy(
    impressionsWithWatermark.impressionAdId,
    window(impressionsWithWatermark.impressionTime, "1 hour")
).count()

clicksWindow.join(impressionsWindow, "window", "inner")
Here’s another example of stream-stream join with time range join condition followed
by time window aggregation:
joined = impressionsWithWatermark.join(
    clicksWithWatermark,
    expr("""
        clickAdId = impressionAdId AND
        clickTime >= impressionTime AND
        clickTime <= impressionTime + interval 1 hour
        """),
    "leftOuter")

joined.groupBy(joined.clickAdId, window(joined.clickTime, "1 hour")).count()
1.12.Streaming Deduplication
You can deduplicate records in data streams using a unique identifier in the events. This is exactly the same as deduplication on static data using a unique identifier column. The query will store the necessary amount of data from previous records such that it can filter
duplicate records. Similar to aggregations, you can use deduplication with or without
watermarking.
• With watermark - If there is an upper bound on how late a duplicate record may
arrive, then you can define a watermark on an event time column and
deduplicate using both the guid and the event time columns. The query will use
the watermark to remove old state data from past records that are not expected
to get any duplicates any more. This bounds the amount of the state the query
has to maintain.
• Without watermark - Since there are no bounds on when a duplicate record may
arrive, the query stores the data from all the past records as state.
streamingDf = spark.readStream. ...

# Without watermark using guid column
streamingDf.dropDuplicates(["guid"])

# With watermark using guid and eventTime columns
streamingDf \
    .withWatermark("eventTime", "10 seconds") \
    .dropDuplicates(["guid", "eventTime"])
Specifically for streaming, you can deduplicate records in data streams using a unique
identifier in the events, within the time range of watermark. For example, if you set the
delay threshold of watermark as “1 hour”, duplicated events which occurred within 1
hour can be correctly deduplicated. (For more details, please refer to the API doc
of dropDuplicatesWithinWatermark.)
This can be used to deal with the use case where the event time column cannot be part of the unique identifier, mostly because event times differ in some way for the same record (e.g. a non-idempotent writer, where the event time is assigned at write time).
Users are encouraged to set the delay threshold of watermark longer than max
timestamp differences among duplicated events.
streamingDf = spark.readStream. ...

# Deduplicate using guid column, with a watermark based on the eventTime column
streamingDf \
    .withWatermark("eventTime", "10 hours") \
    .dropDuplicatesWithinWatermark(["guid"])