Pyspark Practice - Databricks
Pyspark practice
Union and UnionByName transformations
Concept:
union and unionByName are used to merge two or more DataFrames. union matches columns by position, so it works
only when the columns of both DataFrames are in the same order; it can give surprisingly wrong results when the
schemas aren't the same. unionByName matches columns by name, so it works even when both DataFrames have the
same columns in a different order.
Syntax of union()
Syntax: data_frame1.union(data_frame2)
where data_frame1 and data_frame2 are the DataFrames to be merged.
Example 1
americans = spark.createDataFrame(
[("bob", 42), ("lisa", 59)], ["first_name", "age"]
)
colombians = spark.createDataFrame(
[("maria", 20), ("camilo", 31)], ["first_name", "age"]
)
res = americans.union(colombians)
res.show()
+----------+---+
|first_name|age|
+----------+---+
| bob| 42|
| lisa| 59|
| maria| 20|
| camilo| 31|
+----------+---+
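The cells that created df1 and df2 for the next example are not visible in this export. A minimal sketch consistent with the outputs below (the column names and values are assumptions inferred from the printed tables):

df1 = spark.createDataFrame([(1, "Krishna", "IT", "male")], ["id", "name", "department", "gender"])
df2 = spark.createDataFrame([(1, "Krishna", "IT", 10000)], ["id", "name", "department", "salary"])
df2.show()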
+---+-------+----------+------+
| id|   name|department|salary|
+---+-------+----------+------+
|  1|Krishna|        IT| 10000|
+---+-------+----------+------+
df1.union(df2).show()
+---+-------+----------+------+
| id| name|department|salary|
+---+-------+----------+------+
| 1|Krishna| IT| male|
| 1|Krishna| IT| 10000|
+---+-------+----------+------+
df1.unionByName(df2, allowMissingColumns=True).show()
# union above matched columns purely by position, which is why df1's gender value ("male")
# landed under the salary header. unionByName matches columns by name instead, and with
# allowMissingColumns=True it fills in columns missing from either side (here gender and salary) with null.
+---+-------+----------+------+------+
| id|   name|department|gender|salary|
+---+-------+----------+------+------+
|  1|Krishna|        IT|  male|  null|
|  1|Krishna|        IT|  null| 10000|
+---+-------+----------+------+------+
Remember: if the column names or the number of columns differ between the two DataFrames, use unionByName with
allowMissingColumns=True.
Example 2
# union
data_frame1 = spark.createDataFrame(
[("Nitya", 82.98), ("Abhishek", 80.31)],
["Student Name", "Overall Percentage"]
)
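The cell defining data_frame2 is missing from this export; a sketch with values inferred from the union output below:

data_frame2 = spark.createDataFrame(
    [("Sandeep", 91.123), ("Rakesh", 90.51)],
    ["Student Name", "Overall Percentage"]
)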
# union()
UnionEXP = data_frame1.union(data_frame2)
UnionEXP.show()
+------------+------------------+
|Student Name|Overall Percentage|
+------------+------------------+
| Nitya| 82.98|
| Abhishek| 80.31|
| Sandeep| 91.123|
| Rakesh| 90.51|
+------------+------------------+
Syntax: data_frame1.unionByName(data_frame2)
data_frame1 = spark.createDataFrame(
[("Nitya", 82.98), ("Abhishek", 80.31)],
["Student Name", "Overall Percentage"]
)
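The cells defining data_frame2 (with its columns in a different order) and byName are not visible in this export; a sketch consistent with the output below (values inferred from the printed rows):

data_frame2 = spark.createDataFrame(
    [(91.123, "Naveen"), (90.51, "Sandeep"), (87.67, "Rakesh")],
    ["Overall Percentage", "Student Name"]
)
byName = data_frame1.unionByName(data_frame2)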
byName.show()
Note that in this example data_frame1 and data_frame2 have their columns in a different order, yet the output is the desired one.
+------------+------------------+
|Student Name|Overall Percentage|
+------------+------------------+
| Nitya| 82.98|
| Abhishek| 80.31|
| Naveen| 91.123|
| Sandeep| 90.51|
| Rakesh| 87.67|
+------------+------------------+
data_frame1 = spark.createDataFrame(
    [("Bhuwanesh", 82.98, "Computer Science"), ("Harshit", 80.31, "Information Technology")],
    ["Student Name", "Overall Percentage", "Department"]
)
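The second DataFrame and the unionByName call are again missing from this export; a sketch consistent with the output below (values inferred; only the Naveen row is visible before the page cut):

data_frame2 = spark.createDataFrame(
    [("Naveen", 91.123)],
    ["Student Name", "Overall Percentage"]
)
column_name_morein1df = data_frame1.unionByName(data_frame2, allowMissingColumns=True)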
column_name_morein1df.show()
In this example the first DataFrame has an extra column (Department) that the second DataFrame lacks,
but unionByName(..., allowMissingColumns=True) still produces the desired result, filling the missing values with null.
+------------+------------------+--------------------+
|Student Name|Overall Percentage| Department|
+------------+------------------+--------------------+
| Bhuwanesh| 82.98| Computer Science|
| Harshit| 80.31|Information Techn...|
| Naveen| 91.123| null|
Window function:
PySpark window functions are useful when you want to examine relationships within a group of data rather than
between groups of data. A window function performs statistical operations such as rank, row number, etc. on a
group, frame, or collection of rows and returns a result for each row individually. Window functions fall into
three families:
Analytical Function
Ranking Function
Aggregate Function
Analytical functions:
An analytic function returns a result for each row after operating on a finite set of rows defined by the window's
partitioning and ORDER BY clause. It returns the same number of rows as it receives as input.
E.g. lead(), lag(), cume_dist().
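The cell that builds the sample employee DataFrame is not visible in this export; a minimal sketch reconstructed from the printed schema and rows:

emp_data = [
    ("Nitya", 28, "Sales", 3000), ("Abhishek", 33, "Sales", 4600),
    ("Sandeep", 40, "Sales", 4100), ("Rakesh", 25, "Finance", 3000),
    ("Ram", 28, "Sales", 3000), ("Srishti", 46, "Management", 3300),
    ("Arbind", 26, "Finance", 3900), ("Hitesh", 30, "Marketing", 3000),
    ("Kailash", 29, "Marketing", 2000), ("Sushma", 39, "Sales", 4100),
]
df = spark.createDataFrame(emp_data, ["Employee_Name", "Age", "Department", "Salary"])
df.printSchema()
df.show()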
root
 |-- Employee_Name: string (nullable = true)
 |-- Age: long (nullable = true)
 |-- Department: string (nullable = true)
 |-- Salary: long (nullable = true)
+-------------+---+----------+------+
|Employee_Name|Age|Department|Salary|
+-------------+---+----------+------+
| Nitya| 28| Sales| 3000|
| Abhishek| 33| Sales| 4600|
| Sandeep| 40| Sales| 4100|
| Rakesh| 25| Finance| 3000|
| Ram| 28| Sales| 3000|
| Srishti| 46|Management| 3300|
| Arbind| 26| Finance| 3900|
| Hitesh| 30| Marketing| 3000|
| Kailash| 29| Marketing| 2000|
| Sushma| 39| Sales| 4100|
+-------------+---+----------+------+
Using cume_dist():
cume_dist() window function is used to get the cumulative distribution within a window partition.
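The cell applying cume_dist() is not shown in this export; a minimal sketch consistent with the output (partitioning by Department and ordering by Salary is an assumption inferred from the values):

from pyspark.sql import Window
from pyspark.sql.functions import cume_dist

windowSpec = Window.partitionBy("Department").orderBy("Salary")
df.withColumn("cume_dist", cume_dist().over(windowSpec)).display()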
Table
  Employee_Name  Age  Department  Salary  cume_dist
1 Rakesh         25   Finance     3000    0.5
2 Arbind         26   Finance     3900    1
3 Srishti        46   Management  3300    1
4 Kailash        29   Marketing   2000    0.5
5 Hitesh         30   Marketing   3000    1
6 Nitya          28   Sales       3000    0.4
7 Ram            28   Sales       3000    0.4
10 rows
Using lag()
The lag() function is used to access a previous row's data, as per the offset value defined in the function call.
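The lag() cell and its output table were cut from this export; a sketch of typical usage over the same window (the offset of 2 is an assumption):

from pyspark.sql.functions import lag

df.withColumn("lag", lag("Salary", 2).over(windowSpec)).display()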
Using lead()
The lead() function is used to access a following row's data, as per the offset value defined in the function call.
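The lead() cell is likewise missing; a sketch consistent with the output below (the offset of 2 is an assumption inferred from the nulls in the two-row partitions; exact values for tied salaries depend on row order):

from pyspark.sql.functions import lead

df.withColumn("Lead", lead("Salary", 2).over(windowSpec)).display()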
Table
  Employee_Name  Age  Department  Salary  Lead
1 Rakesh         25   Finance     3000    null
2 Arbind         26   Finance     3900    null
3 Srishti        46   Management  3300    null
4 Kailash        29   Marketing   2000    null
5 Hitesh         30   Marketing   3000    null
6 Nitya          28   Sales       3000    4600
7 Ram            28   Sales       3000    4100
10 rows
Ranking functions
A ranking function returns the statistical rank of a given value for each row in a partition or group. The goal of
these functions is to number the rows in the resultant column according to the ordering defined in the window
specification, restarting for each partition specified in the OVER clause. E.g. row_number(), rank(), dense_rank().
A. row_number(): the row_number() window function gives a sequential row number, starting from 1, to each row of
the result within every window partition.
B. rank(): the rank() window function provides a rank to the result within a window partition. This function leaves
gaps in the rank when there are ties. Example: 1 1 1 4 is a rank sequence.
C. dense_rank(): the dense_rank() window function ranks rows within a window partition without gaps. It is similar
to rank(), the difference being that rank() leaves gaps in the rank when there are ties. Example: 1 1 1 2 is a
dense rank sequence.
DataFrame.withColumn("col_name", Window_function().over(Window_partition))
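The cell building the student DataFrame is missing from this export; a sketch reconstructed from the printed schema and rows (the "Phisycs" spelling is preserved from the original data):

student_data = [
    (101, "Ram", "Biology", 80), (103, "Sita", "Social Science", 78),
    (104, "Lakshman", "Sanskrit", 58), (102, "Kunal", "Phisycs", 89),
    (101, "Ram", "Biology", 80), (106, "Srishti", "Maths", 70),
    (108, "Sandeep", "Physics", 75), (107, "Hitesh", "Maths", 88),
    (109, "Kailash", "Maths", 90), (105, "Abhishek", "Social Science", 84),
]
df = spark.createDataFrame(student_data, ["Roll_No", "Student_Name", "Subject", "Marks"])
df.printSchema()
df.display()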
root
|-- Roll_No: long (nullable = true)
|-- Student_Name: string (nullable = true)
|-- Subject: string (nullable = true)
|-- Marks: long (nullable = true)
Table
   Roll_No  Student_Name  Subject         Marks
1  101      Ram           Biology         80
2  103      Sita          Social Science  78
3  104      Lakshman      Sanskrit        58
4  102      Kunal         Phisycs         89
5  101      Ram           Biology         80
6  106      Srishti       Maths           70
7  108      Sandeep       Physics         75
8  107      Hitesh        Maths           88
9  109      Kailash       Maths           90
10 105      Abhishek      Social Science  84
10 rows
Using row_number()
The row_number() function gives a sequential number, starting from 1, to each row within a window partition.
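The code cell is not visible in this export; a sketch consistent with the output below (partitioning by Subject and ordering by Marks is inferred from the values):

from pyspark.sql.functions import row_number

windowSpec = Window.partitionBy("Subject").orderBy("Marks")
df.withColumn("row_number", row_number().over(windowSpec)).display()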
Table
  Roll_No  Student_Name  Subject  Marks  row_number
1 101      Ram           Biology  80     1
2 101      Ram           Biology  80     2
3 106      Srishti       Maths    70     1
4 107      Hitesh        Maths    88     2
5 109      Kailash       Maths    90     3
6 102      Kunal         Phisycs  89     1
10 rows
Using rank()
The rank function is used to give ranks to rows specified in the window partition. This function leaves gaps in rank
if there are ties.
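The rank() cell is missing; a sketch consistent with the output below, reusing the same window:

from pyspark.sql.functions import rank

df.withColumn("rank", rank().over(windowSpec)).display()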
Table
  Roll_No  Student_Name  Subject  Marks  rank
1 101      Ram           Biology  80     1
2 101      Ram           Biology  80     1
3 106      Srishti       Maths    70     1
4 107      Hitesh        Maths    88     2
5 109      Kailash       Maths    90     3
6 102      Kunal         Phisycs  89     1
7 108      Sandeep       Physics  75     1
10 rows
Using percent_rank()
This function is similar to rank() function. It also provides rank to rows but in a percentile format.
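The percent_rank() cell is missing; a sketch consistent with the output below:

from pyspark.sql.functions import percent_rank

df.withColumn("percent_rank", percent_rank().over(windowSpec)).display()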
Table
  Roll_No  Student_Name  Subject  Marks  percent_rank
1 101      Ram           Biology  80     0
2 101      Ram           Biology  80     0
3 106      Srishti       Maths    70     0
4 107      Hitesh        Maths    88     0.5
5 109      Kailash       Maths    90     1
6 102      Kunal         Phisycs  89     0
7 108      Sandeep       Physics  75     0
10 rows
Using dense_rank()
This function is used to get the rank of each row within a window partition. It is similar to rank(); the only
difference is that rank() leaves gaps in the rank when there are ties, while dense_rank() does not.
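The dense_rank() cell is missing; a sketch consistent with the output below:

from pyspark.sql.functions import dense_rank

df.withColumn("dense_rank", dense_rank().over(windowSpec)).display()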
Table
  Roll_No  Student_Name  Subject  Marks  dense_rank
1 101      Ram           Biology  80     1
2 101      Ram           Biology  80     1
3 106      Srishti       Maths    70     1
4 107      Hitesh        Maths    88     2
5 109      Kailash       Maths    90     3
6 102      Kunal         Phisycs  89     1
7 108      Sandeep       Physics  75     1
10 rows
Aggregate functions
An aggregate function, or aggregation function, is a function where the values of multiple rows are grouped to form
a single summary value. In plain SQL the groups are defined with the GROUP BY clause; with window functions they
are defined by the window partition. E.g. avg, sum, min, max.
root
|-- Employee_Name: string (nullable = true)
|-- Department: string (nullable = true)
|-- Salary: long (nullable = true)
Table
  Employee_Name  Department  Salary
1 Ram            Sales       3000
2 Meena          Sales       4600
3 Abhishek       Sales       4100
4 Kunal          Finance     3000
10 rows
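The aggregation cell is missing; a sketch consistent with the Avg output below (an unordered window partitioned by Department):

from pyspark.sql.functions import avg, sum, col

windowSpecAgg = Window.partitionBy("Department")
df.withColumn("Avg", avg(col("Salary")).over(windowSpecAgg)).show()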
+-------------+----------+------+------+
|Employee_Name|Department|Salary| Avg|
+-------------+----------+------+------+
| Kunal| Finance| 3000|3450.0|
| Sandeep| Finance| 3900|3450.0|
| Srishti|Management| 3300|3300.0|
| Hitesh| Marketing| 3000|2500.0|
| Kailash| Marketing| 2000|2500.0|
| Ram| Sales| 3000|3760.0|
| Meena| Sales| 4600|3760.0|
| Abhishek| Sales| 4100|3760.0|
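Similarly for the Sum output (a sketch of the missing cell):

df.withColumn("Sum", sum(col("Salary")).over(windowSpecAgg)).show()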
+-------------+----------+------+-----+
|Employee_Name|Department|Salary| Sum|
+-------------+----------+------+-----+
| Kunal| Finance| 3000| 6900|
| Sandeep| Finance| 3900| 6900|
Timestamp and date conversions

from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .master("local[*]") \
    .appName("timestamp") \
    .getOrCreate()

df = spark.createDataFrame(
    [["1", "2019-07-01 12:01:19.000"], ["2", "2019-06-24 12:01:19.000"]],
    ["id", "input_timestamp"]
)
df.printSchema()
df.display()
root
|-- id: string (nullable = true)
|-- input_timestamp: string (nullable = true)
Table
  id  input_timestamp
1 1   2019-07-01 12:01:19.000
2 2   2019-06-24 12:01:19.000
2 rows
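The conversion cell is missing from this export; a sketch consistent with the next table (df1 is an assumed name):

from pyspark.sql.functions import col, to_timestamp

df1 = df.withColumn("timestamptype", to_timestamp(col("input_timestamp")))
df1.display()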
Table
  id  input_timestamp          timestamptype
1 1   2019-07-01 12:01:19.000  2019-07-01T12:01:19.000+0000
2 2   2019-06-24 12:01:19.000  2019-06-24T12:01:19.000+0000
2 rows
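The next table shows input_timestamp converted to a timestamp in place; since df2 is referenced by the cast below, presumably:

df2 = df.withColumn("input_timestamp", to_timestamp(col("input_timestamp")))
df2.display()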
Table
  id  input_timestamp
1 1   2019-07-01T12:01:19.000+0000
2 2   2019-06-24T12:01:19.000+0000
2 rows
df3 = df2.select(col("id"), col("input_timestamp").cast("string"))
df3.display()
Table
  id  input_timestamp
1 1   2019-07-01 12:01:19
2 2   2019-06-24 12:01:19
2 rows
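The to_date() cell is missing; a sketch consistent with the output below:

from pyspark.sql.functions import to_date

df2.select(col("id"), to_date(col("input_timestamp"))).display()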
Table
  id  to_date(input_timestamp)
1 1   2019-07-01
2 2   2019-06-24
2 rows
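A new example follows: picking rows per department with row_number(). The cell creating the DataFrame is missing; a sketch reconstructed from the outputs (the original row order is an assumption):

from pyspark.sql import Window
from pyspark.sql.functions import row_number, col

emp = [
    ("Nitya", "Sales", 3000), ("Abhi", "Sales", 4600), ("Rakesh", "Sales", 4100),
    ("Sandeep", "finance", 3000), ("Abhishek", "Sales", 3000), ("Shyan", "finance", 3300),
    ("Madan", "finance", 3900), ("kumar", "marketing", 2000), ("Jarin", "marketing", 3000),
]
df = spark.createDataFrame(emp, ["employee_name", "department", "Salary"])
df.display()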
Table
  employee_name  department  Salary
1 Nitya          Sales       3000
2 Abhi           Sales       4600
3 Rakesh         Sales       4100
4 Sandeep        finance     3000
9 rows
windowSpec = Window.partitionBy("department").orderBy("salary")
df1 = df.withColumn("row", row_number().over(windowSpec)) # applying row_number
df1.display()
Table
  employee_name  department  Salary  row
1 Nitya          Sales       3000    1
2 Abhishek       Sales       3000    2
3 Rakesh         Sales       4100    3
4 Abhi           Sales       4600    4
5 Sandeep        finance     3000    1
6 Shyan          finance     3300    2
7 Madan          finance     3900    3
8 kumar          marketing   2000    1
9 Jarin          marketing   3000    2
9 rows
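The filtering cell is missing; the next table (the two lowest-salary rows per department) is consistent with:

df2 = df1.where(col("row") <= 2)
df2.display()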
Table
  employee_name  department  Salary  row
1 Nitya          Sales       3000    1
2 Abhishek       Sales       3000    2
3 Sandeep        finance     3000    1
4 Shyan          finance     3300    2
5 kumar          marketing   2000    1
6 Jarin          marketing   3000    2
6 rows
Dropping duplicates

spark = SparkSession \
    .builder \
    .appName("droppingDuplicates") \
    .master("local[*]") \
    .getOrCreate()

sample_data = ([1, "ramesh", 1000], [2, "Krishna", 2000], [3, "Shri", 3000], [4, "Pradip", 4000],
               [1, "ramesh", 1000], [2, "Krishna", 2000], [3, "Shri", 3000], [4, "Pradip", 4000])
Table
  id  name     salary
1 1   ramesh   1000
2 2   Krishna  2000
3 3   Shri     3000
4 4   Pradip   4000
5 1   ramesh   1000
6 2   Krishna  2000
7 3   Shri     3000
8 4   Pradip   4000
8 rows
df.distinct().show()
+---+-------+------+
| id| name|salary|
+---+-------+------+
| 1| ramesh| 1000|
| 2|Krishna| 2000|
| 3| Shri| 3000|
| 4| Pradip| 4000|
+---+-------+------+
df.dropDuplicates().show()
+---+-------+------+
| id| name|salary|
+---+-------+------+
| 1| ramesh| 1000|
| 2|Krishna| 2000|
| 3| Shri| 3000|
| 4| Pradip| 4000|
+---+-------+------+
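The cell producing the next output (only id and name, duplicates removed) is missing; it is consistent with, for example:

df.select("id", "name").dropDuplicates().show()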
+---+-------+
| id| name|
+---+-------+
| 1| ramesh|
| 2|Krishna|
| 3| Shri|
| 4| Pradip|
+---+-------+
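Exploding and flattening nested arrays. The cell creating the nested-array DataFrame is missing; a sketch reconstructed from the outputs below:

from pyspark.sql.functions import explode, flatten

subject_data = [
    ("Abhishek", [["Java", "scala", "perl"], ["spark", "java"]]),
    ("Nitya", [["spark", "java", "c++"], ["spark", "java"]]),
    ("Sandeep", [["csharp", "vb"], ["spark", "python"]]),
]
df = spark.createDataFrame(subject_data, ["name", "subjects"])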
df.printSchema()
df.show()
df.select(df.name, explode(df.subjects)).show(truncate=False)
df.select(df.name, flatten(df.subjects)).show(truncate=False)
root
 |-- name: string (nullable = true)
 |-- subjects: array (nullable = true)
 |    |-- element: array (containsNull = true)
 |    |    |-- element: string (containsNull = true)
+--------+--------------------+
| name| subjects|
+--------+--------------------+
|Abhishek|[[Java, scala, pe...|
| Nitya|[[spark, java, c+...|
| Sandeep|[[csharp, vb], [s...|
+--------+--------------------+
df.select(df.name, explode(df.subjects)).show(truncate=False)
+--------+-------------------+
|name |col |
+--------+-------------------+
|Abhishek|[Java, scala, perl]|
|Abhishek|[spark, java] |
|Nitya |[spark, java, c++] |
|Nitya |[spark, java] |
|Sandeep |[csharp, vb] |
|Sandeep |[spark, python] |
+--------+-------------------+
df.select(df.name, flatten(df.subjects)).show(truncate=False)
+--------+--------------------------------+
|name |flatten(subjects) |
+--------+--------------------------------+
|Abhishek|[Java, scala, perl, spark, java]|
|Nitya |[spark, java, c++, spark, java] |
|Sandeep |[csharp, vb, spark, python] |
+--------+--------------------------------+
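Splitting a delimited column. The cell creating this DataFrame is missing; a sketch matching the output below:

marks_data = [(1, "Abhishek", "10|30|40"), (2, "Krishna", "50|40|70"), (3, "rakesh", "20|70|90")]
df = spark.createDataFrame(marks_data, ["id", "name", "marks"])
df.show()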
+---+--------+--------+
| id| name| marks|
+---+--------+--------+
| 1|Abhishek|10|30|40|
| 2| Krishna|50|40|70|
| 3| rakesh|20|70|90|
+---+--------+--------+
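The split cell is also missing; a sketch consistent with the next table (the column names are assumptions):

from pyspark.sql.functions import split, col

df2 = df.withColumn("mark_details", split(col("marks"), "\\|")) \
    .withColumn("maths", col("mark_details")[0]) \
    .withColumn("physics", col("mark_details")[1]) \
    .withColumn("chemistry", col("mark_details")[2])
df2.display()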
Table
  id  name      marks     mark_details        maths  physics  chemistry
1 1   Abhishek  10|30|40  ["10", "30", "40"]  10     30       40
2 2   Krishna   50|40|70  ["50", "40", "70"]  50     40       70
3 3   rakesh    20|70|90  ["20", "70", "90"]  20     70       90
3 rows
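The final projection cell is missing; presumably the helper columns are dropped:

df2.drop("marks", "mark_details").display()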
Table
  id  name      maths  physics  chemistry
1 1   Abhishek  10     30       40
2 2   Krishna   50     40       70
3 3   rakesh    20     70       90
3 rows
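Checking whether a column exists in a DataFrame. The cell creating the DataFrame is missing; presumably:

df = spark.createDataFrame([(10, "Krishna"), (20, "mahesh"), (30, "Rakesh")], ["empid", "empname"])
df.show()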
+-----+-------+
|empid|empname|
+-----+-------+
| 10|Krishna|
| 20| mahesh|
| 30| Rakesh|
+-----+-------+
print(df.schema.fieldNames())
['empid', 'empname']
columns = df.schema.fieldNames()
if columns.count('empid') > 0:
    print('empid exists in the dataframe')
else:
    print('not exists')
PySpark Join
Join is used to combine two or more DataFrames based on common columns.
Syntax: dataframe1.join(dataframe2, dataframe1.column_name == dataframe2.column_name, "type")
where "type" is the join type: inner, outer, left, right, leftsemi, or leftanti.
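The cells creating the two DataFrames are not visible in this export (and the NAME values vary between the outputs below, suggesting the notebook was re-run with different data); a sketch matching the first pair of tables:

dataframe1 = spark.createDataFrame(
    [(1, "Saroj", "company 1"), (2, "Nitya", "company 1"), (3, "Abhishek", "company 2"),
     (4, "Sandeep", "company 1"), (5, "Rakesh", "company 1")],
    ["ID", "NAME", "Company"])
dataframe2 = spark.createDataFrame(
    [(1, 45000, "IT"), (2, 145000, "Manager"), (6, 45000, "HR"), (5, 34000, "Sales")],
    ["ID", "salary", "department"])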
Table
  ID  NAME      Company
1 1   Saroj     company 1
2 2   Nitya     company 1
3 3   Abhishek  company 2
4 4   Sandeep   company 1
5 5   Rakesh    company 1
5 rows
Table
  ID  salary  department
1 1   45000   IT
2 2   145000  Manager
3 6   45000   HR
4 5   34000   Sales
4 rows
Inner join
This joins the two PySpark DataFrames on key columns that are common to both DataFrames.
Syntax: dataframe1.join(dataframe2, dataframe1.column_name == dataframe2.column_name, "inner")
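A sketch of the missing cell:

dataframe1.join(dataframe2, dataframe1.ID == dataframe2.ID, "inner").display()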
Table
  ID  NAME    Company    ID  salary  department
1 1   sravan  company 1  1   45000   IT
2 2   ojaswi  company 1  2   145000  Manager
3 rows
Full Outer Join
This join returns all matching and non-matching rows from both DataFrames; we can perform it in three ways.
Syntax:
outer: dataframe1.join(dataframe2, dataframe1.column_name == dataframe2.column_name, "outer")
full: dataframe1.join(dataframe2, dataframe1.column_name == dataframe2.column_name, "full")
fullouter: dataframe1.join(dataframe2, dataframe1.column_name == dataframe2.column_name, "fullouter")
Table
  ID    NAME      Company    ID    salary  department
1 1     Nitya     company 1  1     45000   IT
2 2     Ramesh    company 1  2     145000  Manager
3 3     Abhishek  company 2  null  null    null
4 4     Sandeep   company 1  null  null    null
5 5     Manisha   company 1  5     34000   Sales
6 null  null      null       6     45000   HR
6 rows
Table
  ID    NAME      Company    ID    salary  department
1 1     Nitya     company 1  1     45000   IT
2 2     Rakesh    company 1  2     145000  Manager
3 3     Abhishek  company 2  null  null    null
4 4     Anjali    company 1  null  null    null
5 5     Saviya    company 1  5     34000   Sales
6 null  null      null       6     45000   HR
6 rows
Left Join
This join returns all rows from the first DataFrame and only the matched rows from the second DataFrame. We can
perform this type of join using left and leftouter.
Syntax:
left: dataframe1.join(dataframe2, dataframe1.column_name == dataframe2.column_name, "left")
leftouter: dataframe1.join(dataframe2, dataframe1.column_name == dataframe2.column_name, "leftouter")
Table
  ID  NAME  Company  ID  salary  department
5 rows
Right Join
This join returns all rows from the second DataFrame and only the matched rows from the first DataFrame. We can
perform this type of join using right and rightouter.
Syntax:
right: dataframe1.join(dataframe2, dataframe1.column_name == dataframe2.column_name, "right")
rightouter: dataframe1.join(dataframe2, dataframe1.column_name == dataframe2.column_name, "rightouter")
Table
  ID    NAME     Company    ID  salary  department
1 1     Manisha  company 1  1   45000   IT
2 2     Aarti    company 1  2   145000  Manager
3 null  null     null       6   45000   HR
4 5     Virat    company 1  5   34000   Sales
4 rows
Leftsemi join
This join returns only the rows from the first DataFrame that have a match in the second DataFrame, keeping only
the first DataFrame's columns.
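A sketch of the missing cell:

dataframe1.join(dataframe2, dataframe1.ID == dataframe2.ID, "leftsemi").display()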
Table
  ID  NAME      Company
1 1   Mitchell  company 1
2 2   Rachin    company 1
3 5   Thomas    company 1
3 rows
LeftAnti join
This join returns only the rows from the first DataFrame that have no match in the second DataFrame, keeping only
the first DataFrame's columns.
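A sketch of the missing cell:

dataframe1.join(dataframe2, dataframe1.ID == dataframe2.ID, "leftanti").display()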
Table
  ID  NAME   Company
1 3   Rohit  company 2
2 4   Srini  company 1
2 rows
SQL Expression
We can perform all of the above join types using a SQL expression; we have to mention the type of join in the
expression. To do this, we first have to register each DataFrame as a temporary view.
Syntax: dataframe.createOrReplaceTempView("name")
where name is the name of the temporary view to create.
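The SQL cell itself is missing; a sketch consistent with the output below (the view names are assumptions):

dataframe1.createOrReplaceTempView("EMP1")
dataframe2.createOrReplaceTempView("EMP2")
spark.sql("SELECT * FROM EMP1 e1 INNER JOIN EMP2 e2 ON e1.ID = e2.ID").display()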
Table
  ID  NAME     Company    ID  salary  department
1 1   Manoj    company 1  1   45000   IT
2 2   Manisha  company 1  2   145000  Manager
3 rows
Using functools
The functools module provides functions for working with other functions and callable objects, to use or extend
them without completely rewriting them. Here functools.reduce chains union over a list of DataFrames.
Syntax: functools.reduce(lambda df1, df2: df1.union(df2.select(df1.columns)), dfs)
where dfs is the list of DataFrames to merge.
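The cells creating df1 and df2 are missing; a sketch matching the outputs below (in this run both DataFrames appear to hold the same rows):

people = [("Ram", "1991-04-01", "M", 3000), ("Mike", "2000-05-19", "M", 4000),
          ("Rohini", "1978-09-05", "M", 4000), ("Maria", "1967-12-01", "F", 4000),
          ("Jenis", "1980-02-17", "F", 1200)]
df1 = spark.createDataFrame(people, ["Name", "DOB", "Gender", "salary"])
df2 = spark.createDataFrame(people, ["Name", "DOB", "Gender", "salary"])
df1.show()
df2.show()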
+------+----------+------+------+
| Name| DOB|Gender|salary|
+------+----------+------+------+
| Ram|1991-04-01| M| 3000|
| Mike|2000-05-19| M| 4000|
|Rohini|1978-09-05| M| 4000|
| Maria|1967-12-01| F| 4000|
| Jenis|1980-02-17| F| 1200|
+------+----------+------+------+
+------+----------+------+------+
| Name| DOB|Gender|salary|
+------+----------+------+------+
| Ram|1991-04-01| M| 3000|
| Mike|2000-05-19| M| 4000|
|Rohini|1978-09-05| M| 4000|
| Maria|1967-12-01| F| 4000|
| Jenis|1980-02-17| F| 1200|
+------+----------+------+------+
import functools

def unionAll(dfs):
    return functools.reduce(lambda df1, df2: df1.union(df2.select(df1.columns)), dfs)

result3 = unionAll([df1, df2])
result3.display()
Table
Name DOB Gender salary
10 rows
df.filter(condition): this function returns a new DataFrame containing the rows that satisfy the given condition.
df.column_name.isNotNull(): this function is used to filter out rows that are NULL/None in the given DataFrame column.
how – accepts 'any' or 'all'. With 'any', a row is dropped if it includes a NULL in any column. With 'all', a row is
dropped only if all of its columns contain NULL values. The default is 'any'.
thresh – an int; rows with fewer than thresh non-null values are dropped. The default is None.
subset – selects the columns to check for NULL values. The default is None.
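These are the parameters of DataFrame.na.drop() / dropna(); a quick illustration (df is assumed to contain nulls):

df.na.drop(how="any").show()           # drop rows with a null in any column
df.na.drop(how="all").show()           # drop rows where every column is null
df.na.drop(thresh=2).show()            # keep rows with at least 2 non-null values
df.na.drop(subset=["salary"]).show()   # only consider the salary column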
Parameters of spark.createDataFrame():
dataRDD: an RDD of any kind of SQL data representation (e.g. Row, tuple, int, boolean, etc.), or a list, or a
pandas.DataFrame.
schema: a datatype string or a list of column names; default is None.
samplingRatio: the ratio of rows sampled for schema inference.
verifySchema: verify the data types of every row against the schema. Enabled by default.
Returns: DataFrame
import pyspark.sql.types as T

actor_data = [
("James", None, "Bond", "M", 6000),
("Michael", None, None, "M", 4000),
("Robert", None, "Pattinson", "M", 4000),
("Natalie", None, "Portman", "F", 4000),
("Julia", None, "Roberts", "F", 1000)
]
actor_schema = T.StructType([
T.StructField("firstname", T.StringType(), True),
T.StructField("middlename", T.StringType(), True),
T.StructField("lastname", T.StringType(), True),
T.StructField("gender", T.StringType(), True),
T.StructField("salary", T.IntegerType(), True)
])
df = spark.createDataFrame(data=actor_data, schema=actor_schema)
df.show(truncate=False)
+---------+----------+---------+------+------+
|firstname|middlename|lastname |gender|salary|
+---------+----------+---------+------+------+
|James |null |Bond |M |6000 |
|Michael |null |null |M |4000 |
|Robert |null |Pattinson|M |4000 |
|Natalie |null |Portman |F |4000 |
|Julia |null |Roberts |F |1000 |
+---------+----------+---------+------+------+
import pyspark.sql.functions as F

# count the nulls in each column
null_counts = df.select([F.count(F.when(F.col(c).isNull(), c)).alias(c)
                         for c in df.columns]).collect()[0].asDict()
print(null_counts)

df_size = df.count()
# columns in which every row is null
to_drop = [k for k, v in null_counts.items() if v == df_size]
print(to_drop)

output_df = df.drop(*to_drop)
output_df.show(truncate=False)
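Filtering with where()/filter(). The cell creating this DataFrame is missing; a sketch matching the output:

dataframe = spark.createDataFrame(
    [(1, "sravan", "company 1"), (2, "ojaswi", "company 1"), (3, "rohith", "company 2"),
     (4, "sridevi", "company 1"), (1, "sravan", "company 1"), (4, "sridevi", "company 1")],
    ["ID", "NAME", "Company"])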
dataframe.show()
+---+-------+---------+
| ID| NAME| Company|
+---+-------+---------+
| 1| sravan|company 1|
| 2| ojaswi|company 1|
| 3| rohith|company 2|
| 4|sridevi|company 1|
| 1| sravan|company 1|
| 4|sridevi|company 1|
+---+-------+---------+
Syntax: dataframe.where(condition)
dataframe.where(dataframe.ID=='1').show()
+---+------+---------+
| ID| NAME| Company|
+---+------+---------+
| 1|sravan|company 1|
| 1|sravan|company 1|
+---+------+---------+
dataframe.where(dataframe.NAME != 'sravan').show()
+---+-------+---------+
| ID| NAME| Company|
+---+-------+---------+
| 2| ojaswi|company 1|
| 3| rohith|company 2|
| 4|sridevi|company 1|
| 4|sridevi|company 1|
+---+-------+---------+
dataframe.filter(dataframe.ID>'3').show()
+---+-------+---------+
| ID| NAME| Company|
+---+-------+---------+
| 4|sridevi|company 1|
| 4|sridevi|company 1|
+---+-------+---------+
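Counting rows with conditions. The cell creating this DataFrame is missing; a sketch matching the output:

dataframe = spark.createDataFrame(
    [(1, "Nitya", "vignan"), (2, "Nitesh", "vvit"), (3, "Neha", "vvit"),
     (4, "Neerak", "vignan"), (1, "Neekung", "vignan"), (5, "Neelam", "iit")],
    ["ID", "NAME", "college"])
dataframe.show()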
+---+-------+-------+
| ID| NAME|college|
+---+-------+-------+
| 1| Nitya| vignan|
| 2| Nitesh| vvit|
| 3| Neha| vvit|
| 4| Neerak| vignan|
| 1|Neekung| vignan|
| 5| Neelam| iit|
+---+-------+-------+
dataframe.count()
Out[8]: 6
# dataframe.where(condition)
print(dataframe.where(dataframe.ID == '1').count())
2
The two matching rows are:
+---+-------+-------+
| ID| NAME|college|
+---+-------+-------+
| 1| Nitya| vignan|
| 1|Neekung| vignan|
+---+-------+-------+
print(dataframe.where(dataframe.ID != '1').count())
print(dataframe.where(dataframe.college == 'vignan').count())
4
3
3
product_data = [
    (1, "Mencollection", 5, 50, 40),
    (2, "Girlcollection", 5, 5, 5),
    (3, "Childcollectgion", 2, 10, 10),
    (4, "Womencollection", 4, 10, 20)
]
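The cells that built result_df were lost in this export; the table below is consistent with a grouped sum such as (room_df and its columns are assumptions):

result_df = room_df.groupBy("name").sum("volume")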
result_df.show()
+-----+-----------+
| name|sum(volume)|
+-----+-----------+
|Room3| 800|
|Room2| 10125|
|Room1| 10325|
+-----+-----------+