The split() function is used to convert a string column into an array of strings, using a delimiter to perform the split. concat_ws() is the inverse of split(): it creates a string column from an array column of strings, concatenating the elements with the provided delimiter.
PySpark functions used in this notebook are
split(), concat_ws(), DataFrame.createOrReplaceTempView(), DataFrame.drop(), and spark.sql()
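The round trip described above can be illustrated with plain Python string operations, which mirror what split() and concat_ws() do for each row (a conceptual per-row sketch, not PySpark code):

```python
full_name = "Sam A Smith"

# split(col, ' ') -> array of strings, one element per space-separated token
name_parts = full_name.split(' ')   # ['Sam', 'A', 'Smith']

# concat_ws(' ', col) -> single string, elements joined by the delimiter
rebuilt = ' '.join(name_parts)      # 'Sam A Smith'

print(name_parts, rebuilt)
```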
First, we import the required libraries
from pyspark.sql import SparkSession
from pyspark.sql.functions import (col, concat_ws, split)
# initializing spark session instance
spark = SparkSession.builder.appName('pyspark concat snippets').getOrCreate()
Then we load our initial records
columns = ["Full_Name", "Salary"]
data = [("Sam A Smith", 1000), ("Alex Wesley Jones", 120000), ("Steve Paul Jobs", 5000)]
# converting the data to an RDD
rdd = spark.sparkContext.parallelize(data)
# then creating a dataframe from our rdd variable
dfFromRDD2 = spark.createDataFrame(rdd).toDF(*columns)
# visualizing the current data before manipulation
dfFromRDD2.show()
1) Here we split the Full_Name column, which contains the first, middle and last name, and add a new column called Name_Parts
# here we add a new column called 'Name_Parts' and use space ' ' as the delimiter string
modified_dfFromRDD2 = dfFromRDD2.withColumn("Name_Parts", split(col('Full_Name'), ' '))
# visualizing the modified dataframe
modified_dfFromRDD2.show()
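Note that split() interprets its pattern argument as a (Java) regular expression, and since Spark 3.0 it also accepts an optional limit argument. The per-row behaviour is analogous to Python's re.split, shown here as a sketch of the semantics rather than PySpark itself:

```python
import re

s = "Alex  Wesley Jones"   # note the double space

# splitting on a single literal space leaves empty strings
# in the array wherever delimiters are repeated
print(re.split(' ', s))    # ['Alex', '', 'Wesley', 'Jones']

# a regex pattern such as ' +' collapses runs of spaces instead
print(re.split(' +', s))   # ['Alex', 'Wesley', 'Jones']
```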
2) We can also use a SQL query to split the Full_Name column. For this, we need to use createOrReplaceTempView() to create a temporary view from the DataFrame. This view is accessible for as long as the SparkSession that created it is active.
# Below we use a SQL query to select the required columns. This includes the new column we create
# by splitting the Full_Name column.
dfFromRDD2.createOrReplaceTempView("SalaryData")
modified_dfFromRDD3 = spark.sql("select Full_Name, Salary, SPLIT(Full_Name, ' ') as Name_Parts from SalaryData")
# visualizing the modified dataframe after executing the SQL query.
# As you can see, it is exactly the same as the previous output.
modified_dfFromRDD3.show(truncate=False)
Now we will use the above dataframe to demonstrate the concat_ws() function, but we will first drop the Full_Name column and then recreate it using concatenation
# removing the Full_Name column using the drop function
modified_dfFromRDD4 = modified_dfFromRDD3.drop('Full_Name')
# visualizing the modified dataframe
modified_dfFromRDD4.show()
1) Here we concatenate the string elements of the Name_Parts column (first, middle and last name) and add a new column called Full_Name
# here we add a new column called 'Full_Name' and use space ' ' as the delimiter string to concatenate the Name_Parts
modified_dfFromRDD5 = modified_dfFromRDD4.withColumn("Full_Name", concat_ws(' ', col('Name_Parts')))
# visualizing the modified dataframe.
# The Full_Name column is the same as the one in the original dataframe we started with above.
modified_dfFromRDD5.show()
2) We can also use a SQL query to concatenate the Name_Parts column, as we did for split() above. For this, we again use createOrReplaceTempView() to create a temporary view from the DataFrame, and then execute the concatenation query against that view.
# Below we use a SQL query to select the required columns. This includes the new column we create
# by concatenating the Name_Parts column.
modified_dfFromRDD4.createOrReplaceTempView("SalaryData2")
modified_dfFromRDD6 = spark.sql("select Salary, Name_Parts, CONCAT_WS(' ', Name_Parts) as Full_Name from SalaryData2")
# visualizing the modified dataframe after executing the SQL query.
# As you can see, it is exactly the same as the previous output.
modified_dfFromRDD6.show(truncate=False)
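One useful property of concat_ws(), unlike plain concat(), is that it skips NULL elements rather than propagating them into a NULL result. The per-row effect is roughly the following (a plain-Python sketch of the semantics, not PySpark code):

```python
# concat_ws(' ', ...) joins only the non-null elements
parts = ["Steve", None, "Jobs"]
joined = ' '.join(p for p in parts if p is not None)
print(joined)   # 'Steve Jobs'
```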
# stopping the spark session
spark.stop()